* [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver
@ 2026-04-30 7:01 liujie5
2026-04-30 7:01 ` [PATCH v1 1/9] mailmap: add Jie Liu liujie5
` (8 more replies)
0 siblings, 9 replies; 143+ messages in thread
From: liujie5 @ 2026-04-30 7:01 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch set implements the core functionality of the SXE2 PMD,
including the basic driver framework and data path setup.
V1:
- Add sxe2 adapter
Jie Liu (9):
mailmap: add Jie Liu
doc: add sxe2 guide and release notes
drivers: add sxe2 basic structures
common/sxe2: add base driver skeleton
drivers: add base driver probe skeleton
drivers: support PCI BAR mapping
common/sxe2: add ioctl interface for DMA map and unmap
net/sxe2: support queue setup and control
net/sxe2: add data path for Rx and Tx
.mailmap | 1 +
doc/guides/nics/features/sxe2.ini | 11 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/sxe2.rst | 23 +
doc/guides/rel_notes/release_26_07.rst | 3 +
drivers/common/sxe2/meson.build | 15 +
drivers/common/sxe2/sxe2_common.c | 684 +++++++++++++++
drivers/common/sxe2/sxe2_common.h | 86 ++
drivers/common/sxe2/sxe2_common_log.c | 75 ++
drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++
drivers/common/sxe2/sxe2_errno.h | 113 +++
drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++
drivers/common/sxe2/sxe2_internal_ver.h | 33 +
drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++
drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++
drivers/common/sxe2/sxe2_osal.h | 584 ++++++++++++
drivers/common/sxe2/sxe2_type.h | 65 ++
drivers/meson.build | 1 +
drivers/net/meson.build | 1 +
drivers/net/sxe2/meson.build | 25 +
drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++
drivers/net/sxe2/sxe2_cmd_chnl.h | 33 +
drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++
drivers/net/sxe2/sxe2_ethdev.c | 974 +++++++++++++++++++++
drivers/net/sxe2/sxe2_ethdev.h | 316 +++++++
drivers/net/sxe2/sxe2_irq.h | 49 ++
drivers/net/sxe2/sxe2_queue.c | 39 +
drivers/net/sxe2/sxe2_queue.h | 227 +++++
drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++
drivers/net/sxe2/sxe2_rx.h | 34 +
drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++
drivers/net/sxe2/sxe2_tx.h | 32 +
drivers/net/sxe2/sxe2_txrx.c | 249 ++++++
drivers/net/sxe2/sxe2_txrx.h | 21 +
drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++
drivers/net/sxe2/sxe2_txrx_poll.c | 815 +++++++++++++++++
drivers/net/sxe2/sxe2_txrx_poll.h | 16 +
drivers/net/sxe2/sxe2_vsi.c | 211 +++++
drivers/net/sxe2/sxe2_vsi.h | 205 +++++
40 files changed, 8831 insertions(+)
create mode 100644 doc/guides/nics/features/sxe2.ini
create mode 100644 doc/guides/nics/sxe2.rst
create mode 100644 drivers/common/sxe2/meson.build
create mode 100644 drivers/common/sxe2/sxe2_common.c
create mode 100644 drivers/common/sxe2/sxe2_common.h
create mode 100644 drivers/common/sxe2/sxe2_common_log.c
create mode 100644 drivers/common/sxe2/sxe2_common_log.h
create mode 100644 drivers/common/sxe2/sxe2_errno.h
create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h
create mode 100644 drivers/common/sxe2/sxe2_osal.h
create mode 100644 drivers/common/sxe2/sxe2_type.h
create mode 100644 drivers/net/sxe2/meson.build
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
create mode 100644 drivers/net/sxe2/sxe2_irq.h
create mode 100644 drivers/net/sxe2/sxe2_queue.c
create mode 100644 drivers/net/sxe2/sxe2_queue.h
create mode 100644 drivers/net/sxe2/sxe2_rx.c
create mode 100644 drivers/net/sxe2/sxe2_rx.h
create mode 100644 drivers/net/sxe2/sxe2_tx.c
create mode 100644 drivers/net/sxe2/sxe2_tx.h
create mode 100644 drivers/net/sxe2/sxe2_txrx.c
create mode 100644 drivers/net/sxe2/sxe2_txrx.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
create mode 100644 drivers/net/sxe2/sxe2_vsi.c
create mode 100644 drivers/net/sxe2/sxe2_vsi.h
--
2.47.3
^ permalink raw reply	[flat|nested] 143+ messages in thread

* [PATCH v1 1/9] mailmap: add Jie Liu
  2026-04-30  7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5
@ 2026-04-30  7:01 ` liujie5
  2026-04-30  7:01 ` [PATCH v1 2/9] doc: add sxe2 guide and release notes liujie5
  ` (7 subsequent siblings)
  8 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-04-30  7:01 UTC (permalink / raw)
  To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 .mailmap | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.mailmap b/.mailmap
index 0e0d83e1c6..a6c3319dec 100644
--- a/.mailmap
+++ b/.mailmap
@@ -738,6 +738,7 @@ Jiawen Wu <jiawenwu@trustnetic.com>
 Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com>
 Jie Hai <haijie1@huawei.com>
 Jie Liu <jie2.liu@hxt-semitech.com>
+Jie Liu <liujie5@linkdatatechnology.com>
 Jie Pan <panjie5@jd.com>
 Jie Wang <jie1x.wang@intel.com>
 Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com>
--
2.47.3

^ permalink raw reply related	[flat|nested] 143+ messages in thread
* [PATCH v1 2/9] doc: add sxe2 guide and release notes
  2026-04-30  7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5
  2026-04-30  7:01 ` [PATCH v1 1/9] mailmap: add Jie Liu liujie5
@ 2026-04-30  7:01 ` liujie5
  2026-04-30  7:01 ` [PATCH v1 3/9] drivers: add sxe2 basic structures liujie5
  ` (6 subsequent siblings)
  8 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-04-30  7:01 UTC (permalink / raw)
  To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Add a new guide for the SXE2 PMD in the nics directory. The guide
covers driver capabilities, prerequisites, and compilation/usage
instructions. Update the release notes to announce the addition of
the sxe2 network driver.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 doc/guides/nics/features/sxe2.ini      | 11 +++++++++++
 doc/guides/nics/index.rst              |  1 +
 doc/guides/nics/sxe2.rst               | 23 +++++++++++++++++++++++
 doc/guides/rel_notes/release_26_07.rst |  3 +++
 4 files changed, 38 insertions(+)
 create mode 100644 doc/guides/nics/features/sxe2.ini
 create mode 100644 doc/guides/nics/sxe2.rst

diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini
new file mode 100644
index 0000000000..cbf5a773fb
--- /dev/null
+++ b/doc/guides/nics/features/sxe2.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'sxe2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature marked "P" is only supported when the non-vector path
+; is selected.
+;
+[Features]
+Queue start/stop     = Y
+Linux                = Y
\ No newline at end of file
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index cb818284fe..e20be478f8 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -68,6 +68,7 @@ Network Interface Controller Drivers
     rnp
     sfc_efx
     softnic
+    sxe2
     tap
     thunderx
     txgbe
diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst
new file mode 100644
index 0000000000..2f9ba91c33
--- /dev/null
+++ b/doc/guides/nics/sxe2.rst
@@ -0,0 +1,23 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+SXE2 Poll Mode Driver
+=====================
+
+The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for
+10/25/50/100/200 Gbps network adapters.
+The embedded switch, Physical Functions (PF),
+and SR-IOV Virtual Functions (VF) are supported.
+
+Implementation details
+----------------------
+
+For security and robustness, this driver only deals with virtual
+memory addresses. The way resource allocations are handled by the kernel,
+combined with hardware support for addressing virtual memory
+directly, ensures that DPDK applications cannot access random
+physical memory (or memory that does not belong to the current process).
+
+This capability allows the PMD to coexist with kernel network interfaces,
+which remain functional, although they stop receiving unicast packets as
+long as they share the same MAC address.
diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index 060b26ff61..93fb0072a9 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -55,6 +55,9 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================

+* **Added Linkdata sxe2 ethernet driver.**
+
+  Added a network driver for Linkdata network adapters.

 Removed Items
 -------------
--
2.47.3

^ permalink raw reply related	[flat|nested] 143+ messages in thread
* [PATCH v1 3/9] drivers: add sxe2 basic structures
  2026-04-30  7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5
  2026-04-30  7:01 ` [PATCH v1 1/9] mailmap: add Jie Liu liujie5
  2026-04-30  7:01 ` [PATCH v1 2/9] doc: add sxe2 guide and release notes liujie5
@ 2026-04-30  7:01 ` liujie5
  2026-04-30  7:01 ` [PATCH v1 4/9] common/sxe2: add base driver skeleton liujie5
  ` (5 subsequent siblings)
  8 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-04-30  7:01 UTC (permalink / raw)
  To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

This patch adds the base infrastructure for the sxe2 common library.
It includes the mandatory OS abstraction layer (OSAL), common structure
definitions, error codes, and the logging system implementation.

Specifically, this commit:
- Implements the logging stream management using RTE_LOG_LINE.
- Defines device-specific error codes and status registers.
- Adds the initial meson build configuration for the common library.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 drivers/common/sxe2/meson.build         |  13 +
 drivers/common/sxe2/sxe2_common_log.c   |  75 +++
 drivers/common/sxe2/sxe2_common_log.h   | 368 ++++++++++++
 drivers/common/sxe2/sxe2_errno.h        | 113 ++++
 drivers/common/sxe2/sxe2_host_regs.h    | 707 ++++++++++++++++++++++++
 drivers/common/sxe2/sxe2_internal_ver.h |  33 ++
 drivers/common/sxe2/sxe2_osal.h         | 584 +++++++++++++++++++
 drivers/common/sxe2/sxe2_type.h         |  65 +++
 drivers/meson.build                     |   1 +
 9 files changed, 1959 insertions(+)
 create mode 100644 drivers/common/sxe2/meson.build
 create mode 100644 drivers/common/sxe2/sxe2_common_log.c
 create mode 100644 drivers/common/sxe2/sxe2_common_log.h
 create mode 100644 drivers/common/sxe2/sxe2_errno.h
 create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
 create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
 create mode 100644 drivers/common/sxe2/sxe2_osal.h
 create mode 100644 drivers/common/sxe2/sxe2_type.h

diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build
new file mode 100644
index 0000000000..7d448629d5
--- /dev/null
+++ b/drivers/common/sxe2/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+cflags += [
+    '-DSXE2_DPDK_DRIVER',
+    '-DSXE2_DPDK_DEBUG',
+]
+
+deps += ['bus_pci', 'net', 'eal', 'ethdev']
+
+sources = files(
+    'sxe2_common_log.c',
+)
diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c
new file mode 100644
index 0000000000..e2963ce762
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common_log.c
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void +sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, 
Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) 
\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) \ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) 
\ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = -ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, 
+ + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMIEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT 
BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 +#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) 
+#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + 
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define 
SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define 
SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define 
SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 
0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */
+
+#ifndef __SXE2_INTERNAL_VER_H__
+#define __SXE2_INTERNAL_VER_H__
+
+#define SXE2_VER_MAJOR_OFFSET (16)
+#define SXE2_MK_VER(major, minor) \
+	(((major) << SXE2_VER_MAJOR_OFFSET) | (minor))
+#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff)
+#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff)
+
+#define SXE2_ITR_VER_MAJOR_V100 1
+#define SXE2_ITR_VER_MAJOR_V200 2
+
+#define SXE2_ITR_VER_MAJOR 1
+#define SXE2_ITR_VER_MINOR 1
+#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR)
+
+#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100)
+#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200)
+
+#define SXE2LIB_ITR_VER_MAJOR 1
+#define SXE2LIB_ITR_VER_MINOR 1
+#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR)
+
+#define SXE2_DRV_CLI_VER_MAJOR 1
+#define SXE2_DRV_CLI_VER_MINOR 1
+#define SXE2_DRV_CLI_VER \
+	SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR)
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
new file mode 100644
index 0000000000..fd6823fe98
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -0,0 +1,584 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_OSAL_H__
+#define __SXE2_OSAL_H__
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_version.h>
+
+#include "sxe2_type.h"
+
+#define BIT(nr) (1UL << (nr))
+#ifndef __BITS_PER_LONG
+#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
+#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define MIN(a, b) ((a) < (b) ? (a) : (b))
+
+#define BITS_PER_BYTE 8
+
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+
+#define STRUCT_SIZE(ptr, field, num) \
+	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	     (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	     (var) = (tvar))
+#endif
+
+#define SXE2_QUEUE_WAIT_RETRY_CNT (50)
+
+#define __iomem
+
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)((n) & 0xffffffff))
+
+#define dma_addr_t rte_iova_t
+
+#define resource_size_t u64
+
+#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f)
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define BE16_TO_CPU(o) rte_be_to_cpu_16(o)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+#define udelay(x) rte_delay_us(x)
+
+#define mdelay(x) rte_delay_us(1000 * (x))
+
+#define msleep(x) rte_delay_us(1000 * (x))
+
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) \
+	(((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d))
+#endif
+
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#define __bf_shf(x) ((uint32_t)rte_bsf64(x))
+
+#ifndef BITS_PER_LONG
+#define BITS_PER_LONG 32
+#endif
+
+#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask))
+#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)))
+
+#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d)
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad,
+		const void *src, size_t size)
+{
+	void *p;
+
+	p = sxe2_malloc(ad, size);
+	if (p)
+		rte_memcpy(p, src, size);
+	return p;
+}
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h
new file mode 100644
index 0000000000..56d0a11f48
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_type.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_TYPES_H__
+#define __SXE2_TYPES_H__
+
+#include <sys/time.h>
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdint.h>
+
+#if defined __BYTE_ORDER__
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define __BIG_ENDIAN_BITFIELD
+#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+#define __LITTLE_ENDIAN_BITFIELD
+#endif
+#elif defined __BYTE_ORDER
+#if __BYTE_ORDER == __BIG_ENDIAN
+#define __BIG_ENDIAN_BITFIELD
+#elif __BYTE_ORDER == __LITTLE_ENDIAN
+#define __LITTLE_ENDIAN_BITFIELD
+#endif
+#elif defined __BIG_ENDIAN__
+#define __BIG_ENDIAN_BITFIELD
+#elif defined __LITTLE_ENDIAN__
+#define __LITTLE_ENDIAN_BITFIELD
+#elif defined RTE_TOOLCHAIN_MSVC
+#define __LITTLE_ENDIAN_BITFIELD
+#else
+#error "Unknown endianness."
+#endif
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef uint64_t u64;
+
+typedef char s8;
+typedef int16_t s16;
+typedef int32_t s32;
+typedef int64_t s64;
+
+typedef s8 S8;
+typedef s16 S16;
+typedef s32 S32;
+
+#define __le16 u16
+#define __le32 u32
+#define __le64 u64
+
+#define __be16 u16
+#define __be32 u32
+#define __be64 u64
+
+#define STATIC static
+
+#define ETH_ALEN 6
+
+#endif
diff --git a/drivers/meson.build b/drivers/meson.build
index 6ae102e943..d4ae512bae 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -12,6 +12,7 @@ subdirs = [
        'common/qat',     # depends on bus.
        'common/sfc_efx', # depends on bus.
        'common/zsda',    # depends on bus.
+       'common/sxe2',    # depends on bus.
        'mempool',        # depends on common and bus.
        'dma',            # depends on common and bus.
        'net',            # depends on common, bus, mempool
-- 
2.47.3
* [PATCH v1 4/9] common/sxe2: add base driver skeleton 2026-04-30 7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5 ` (2 preceding siblings ...) 2026-04-30 7:01 ` [PATCH v1 3/9] drivers: add sxe2 basic structures liujie5 @ 2026-04-30 7:01 ` liujie5 2026-04-30 7:01 ` [PATCH v1 5/9] drivers: add base driver probe skeleton liujie5 ` (4 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 7:01 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between the user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 2 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ 6 files changed, 1071 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build index 7d448629d5..3626fb1119 100644 --- a/drivers/common/sxe2/meson.build +++ b/drivers/common/sxe2/meson.build @@ -9,5 +9,7 @@ cflags += [ deps += ['bus_pci', 'net', 'eal', 'ethdev'] sources = files( + 'sxe2_common.c', 'sxe2_common_log.c', + 'sxe2_ioctl_chnl.c', ) diff --git a/drivers/common/sxe2/sxe2_common.c 
b/drivers/common/sxe2/sxe2_common.c new file mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ 
+ struct sxe2_class_driver *cdrv = NULL; + + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void 
*args)
+{
+	u32 *class_type = (u32 *)args;
+	s32 ret = SXE2_SUCCESS;
+
+	*class_type = sxe2_class_name_to_value(value);
+	if (*class_type == SXE2_CLASS_TYPE_INVALID) {
+		ret = SXE2_ERR_INVAL;
+		PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value);
+	}
+
+	return ret;
+}
+
+static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+	s32 ret = SXE2_SUCCESS;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		goto l_end;
+
+	ret = sxe2_drv_dev_open(cdev, pci_dev);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret);
+		goto l_end;
+	}
+
+	ret = sxe2_drv_dev_handshark(cdev);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret);
+		goto l_close_dev;
+	}
+
+	goto l_end;
+
+l_close_dev:
+	sxe2_drv_dev_close(cdev);
+l_end:
+	return ret;
+}
+
+static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+
+	if (TAILQ_EMPTY(&sxe2_common_devices_list))
+		(void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL);
+
+	sxe2_drv_dev_close(cdev);
+}
+
+static struct sxe2_common_device *sxe2_common_device_alloc(
+		struct rte_device *rte_dev, u32 class_type)
+{
+	struct sxe2_common_device *cdev = NULL;
+
+	cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0);
+	if (cdev == NULL) {
+		PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device.");
+		goto l_end;
+	}
+	cdev->dev = rte_dev;
+	cdev->class_type = class_type;
+	cdev->config.kernel_reset = false;
+	rte_ticketlock_init(&cdev->config.lock);
+
+	(void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+	TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next);
+	(void)pthread_mutex_unlock(&sxe2_common_devices_list_lock);
+
+l_end:
+	return cdev;
+}
+
+static void sxe2_common_device_free(struct sxe2_common_device *cdev)
+{
+	(void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+	
TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + 
} + + cdev->cdrv = cdrv; +l_end: + return ret; +} + +static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto 
l_free_args;
+	}
+
+	ret = sxe2_common_device_setup(cdev);
+	if (ret != SXE2_SUCCESS)
+		goto l_err_setup;
+
+	ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type);
+	if (ret != SXE2_SUCCESS)
+		goto l_err_probe;
+
+	ret = sxe2_kvargs_validate(kv_info_p);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Device args validate failed: %s",
+			    rte_dev->devargs->args);
+		goto l_err_valid;
+	}
+	cdev->kvargs = kv_info_p;
+
+	goto l_end;
+l_err_valid:
+	(void)sxe2_classes_driver_remove(cdev);
+l_err_probe:
+	sxe2_common_device_cleanup(cdev);
+l_err_setup:
+	sxe2_common_device_free(cdev);
+l_free_args:
+	sxe2_kvargs_free(kv_info_p);
+l_free_kvargs:
+	free(kv_info_p);
+l_end:
+	return ret;
+}
+
+static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct sxe2_common_device *cdev;
+	s32 ret = SXE2_ERROR;
+
+	PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name);
+	cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+	if (cdev == NULL) {
+		ret = SXE2_ERR_NODEV;
+		PMD_LOG_ERR(COM, "Fail to get device to remove.");
+		goto l_end;
+	}
+
+	ret = sxe2_classes_driver_remove(cdev);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name);
+		goto l_end;
+	}
+
+	sxe2_common_device_cleanup(cdev);
+
+	if (cdev->kvargs != NULL) {
+		sxe2_kvargs_free(cdev->kvargs);
+		free(cdev->kvargs);
+		cdev->kvargs = NULL;
+	}
+
+	sxe2_common_device_free(cdev);
+
+l_end:
+	return ret;
+}
+
+static struct rte_pci_driver sxe2_common_pci_driver = {
+	.driver = {
+		.name = SXE2_COMMON_PCI_DRIVER_NAME,
+	},
+	.probe = sxe2_common_pci_probe,
+	.remove = sxe2_common_pci_remove,
+};
+
+static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table)
+{
+	u32 table_size = 0;
+
+	while (id_table->vendor_id != 0) {
+		table_size++;
+		id_table++;
+	}
+
+	return table_size;
+}
+
+static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id,
+		const struct rte_pci_id *id_table, u32 next_idx)
+{
+	s32 current_size = next_idx;
+	s32 i;
+	bool
exists = false; + + for (i = 0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + 
sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC;
+	if (driver->intr_rmv)
+		sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register)
+void
+sxe2_class_driver_register(struct sxe2_class_driver *driver)
+{
+	sxe2_common_driver_on_register_pci(driver);
+	TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next);
+}
+
+static void sxe2_common_pci_init(void)
+{
+	const struct rte_pci_id empty_table[] = {
+		{
+			.vendor_id = 0
+		},
+	};
+	s32 ret = SXE2_ERROR;
+
+	if (sxe2_common_pci_id_table == NULL) {
+		ret = sxe2_common_pci_id_table_update(empty_table);
+		if (ret != SXE2_SUCCESS)
+			goto l_end;
+	}
+	rte_pci_register(&sxe2_common_pci_driver);
+
+l_end:
+	return;
+}
+
+static bool sxe2_common_inited;
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init)
+void
+sxe2_common_init(void)
+{
+	if (sxe2_common_inited)
+		goto l_end;
+
+	pthread_mutex_init(&sxe2_common_devices_list_lock, NULL);
+#ifdef SXE2_DPDK_DEBUG
+	sxe2_common_log_stream_init();
+#endif
+	sxe2_common_pci_init();
+	sxe2_common_inited = true;
+
+l_end:
+	return;
+}
+
+RTE_FINI(sxe2_common_pci_finish)
+{
+	if (sxe2_common_pci_id_table != NULL) {
+		rte_pci_unregister(&sxe2_common_pci_driver);
+		free(sxe2_common_pci_id_table);
+	}
+}
+
+RTE_PMD_EXPORT_NAME(sxe2_common_pci);
diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h
new file mode 100644
index 0000000000..f62e00e053
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <rte_version.h>
+#include <eal_export.h>
+
+#include "sxe2_osal.h"
+#include "sxe2_errno.h"
+#include "sxe2_common_log.h"
+#include "sxe2_ioctl_chnl.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-"
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close)
+void
+sxe2_drv_cmd_close(struct sxe2_common_device *cdev)
+{
+	cdev->config.kernel_reset = true;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec)
+s32
+sxe2_drv_cmd_exec(struct sxe2_common_device *cdev,
+		struct sxe2_drv_cmd_params *cmd_params)
+{
+	s32 cmd_fd;
+	s32 ret = SXE2_ERR_IO;
+
+	if (cdev->config.kernel_reset) {
+		ret = SXE2_ERR_PERM;
+		PMD_LOG_WARN(COM, "kernel was reset, the application must be restarted.");
+		goto l_end;
+	}
+
+	cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+	if (cmd_fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd);
+		goto l_end;
+	}
+
+	PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]"
+		      " opcode[0x%x] req_len[%u] resp_len[%u]",
+		      cmd_fd, cmd_params->trace_id, cmd_params->opcode,
+		      cmd_params->req_len, cmd_params->resp_len);
+
+	rte_ticketlock_lock(&cdev->config.lock);
+	ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params);
+	if (ret < 0) {
+		PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s",
+			    cmd_fd, cmd_params->opcode, ret, strerror(errno));
+		ret = -errno;
+		rte_ticketlock_unlock(&cdev->config.lock);
+		goto l_end;
+	}
+	rte_ticketlock_unlock(&cdev->config.lock);
+
+l_end:
+	return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open)
+s32
+sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct
rte_pci_device *pci_dev)
+{
+	s32 ret = SXE2_SUCCESS;
+	s32 fd = 0;
+	s8 drv_name[32] = {0};
+
+	snprintf(drv_name, sizeof(drv_name),
+		 "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8,
+		 SXE2_CHR_DEV_NAME,
+		 pci_dev->addr.domain,
+		 pci_dev->addr.bus,
+		 pci_dev->addr.devid,
+		 pci_dev->addr.function);
+
+	fd = open(drv_name, O_RDWR);
+	if (fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s",
+			    drv_name, ret, strerror(errno));
+		goto l_end;
+	}
+
+	SXE2_CDEV_TO_CMD_FD(cdev) = fd;
+
+	PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d",
+		     drv_name, SXE2_CDEV_TO_CMD_FD(cdev));
+
+l_end:
+	return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close)
+void
+sxe2_drv_dev_close(struct sxe2_common_device *cdev)
+{
+	s32 fd = SXE2_CDEV_TO_CMD_FD(cdev);
+
+	if (fd > 0)
+		close(fd);
+	PMD_LOG_INFO(COM, "closed device fd=%d", fd);
+	SXE2_CDEV_TO_CMD_FD(cdev) = -1;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark)
+s32
+sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
+{
+	s32 ret = SXE2_SUCCESS;
+	s32 cmd_fd = 0;
+	struct sxe2_ioctl_cmd_common_hdr cmd_params;
+
+	if (cdev->config.kernel_reset) {
+		ret = SXE2_ERR_PERM;
+		PMD_LOG_WARN(COM, "kernel was reset, the application must be restarted.");
+		goto l_end;
+	}
+
+	cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+	if (cmd_fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+		goto l_end;
+	}
+
+	PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd);
+
+	memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr));
+	cmd_params.dpdk_ver = SXE2_COM_VER;
+	cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr);
+
+	rte_ticketlock_lock(&cdev->config.lock);
+	ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params);
+	if (ret < 0) {
+		PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s",
+			    cmd_fd, ret, strerror(errno));
+		ret = SXE2_ERR_IO;
+		rte_ticketlock_unlock(&cdev->config.lock);
+		goto l_end;
+	}
+	
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v1 5/9] drivers: add base driver probe skeleton 2026-04-30 7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5 ` (3 preceding siblings ...) 2026-04-30 7:01 ` [PATCH v1 4/9] common/sxe2: add base driver skeleton liujie5 @ 2026-04-30 7:01 ` liujie5 2026-04-30 7:01 ` [PATCH v1 6/9] drivers: support PCI BAR mapping liujie5 ` (3 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 7:01 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 22 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3025 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h 
create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, application restart required."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp',
'sfc', 'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..160a0de8ed --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Process the base subdirectory and collect its target objects + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret =
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu +
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter,
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to enable queues."); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; +
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
+
+	if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP)
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = SXE2_DEFAULT_RX_PTHRESH,
+			.hthresh = SXE2_DEFAULT_RX_HTHRESH,
+			.wthresh = SXE2_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = SXE2_DEFAULT_TX_PTHRESH,
+			.hthresh = SXE2_DEFAULT_TX_HTHRESH,
+			.wthresh = SXE2_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = SXE2_MAX_RING_DESC,
+		.nb_min = SXE2_MIN_RING_DESC,
+		.nb_align = SXE2_ALIGN,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = SXE2_MAX_RING_DESC,
+		.nb_min = SXE2_MIN_RING_DESC,
+		.nb_align = SXE2_ALIGN,
+		.nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX,
+		.nb_seg_max = SXE2_MAX_RING_DESC,
+	};
+
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
+
+	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+	dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST;
+	dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN;
+	dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN;
+
+	dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG;
+	dev_info->rx_seg_capa.multi_pools = true;
+	dev_info->rx_seg_capa.offset_allowed = false;
+	dev_info->rx_seg_capa.offset_align_log2 = 0;
+
+	return SXE2_SUCCESS;
+}
+
+static const struct eth_dev_ops sxe2_eth_dev_ops = {
+	.dev_configure = sxe2_dev_configure,
+	.dev_start = sxe2_dev_start,
+	.dev_stop = sxe2_dev_stop,
+	.dev_close = sxe2_dev_close,
+	.dev_infos_get = sxe2_dev_infos_get,
+};
+
+static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter,
+				  struct sxe2_drv_dev_caps_resp *dev_caps)
+{
+	adapter->port_idx = dev_caps->port_idx;
+
+	adapter->cap_flags = 0;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP;
+
+	if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE)
+		adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE;
+}
+
+static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter)
+{
+	s32 ret = SXE2_ERROR;
+	struct sxe2_drv_dev_caps_resp dev_caps = {0};
+
+	ret = sxe2_drv_dev_caps_get(adapter, &dev_caps);
+	if (ret)
+		goto l_end;
+
+	adapter->dev_type
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr,
+				    (struct rte_ether_addr *)dev_info->mac.perm_addr);
+	else
+		rte_eth_random_addr(dev_info->mac.perm_addr);
+
+l_end:
+	return ret;
+}
+
+static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused)
+{
+	s32 ret = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dev->dev_ops = &sxe2_eth_dev_ops;
+
+	ret = sxe2_hw_init(dev);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret);
+		goto l_end;
+	}
+
+	ret = sxe2_vsi_init(dev);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Failed to create main vsi, ret=%d", ret);
+		goto init_vsi_err;
+	}
+
+	ret = sxe2_dev_info_init(dev);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret);
+		goto init_dev_info_err;
+	}
+
+	goto l_end;
+
+init_dev_info_err:
+	sxe2_vsi_uninit(dev);
+init_vsi_err:
+l_end:
+	return ret;
+}
+
+static s32 sxe2_dev_uninit(struct rte_eth_dev *dev)
+{
+	s32 ret = 0;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		goto l_end;
+
+	ret = sxe2_dev_close(dev);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret);
+		goto l_end;
+	}
+
+l_end:
+	return ret;
+}
+
+static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev)
+{
+	struct rte_eth_dev *eth_dev;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+	s32 ret = SXE2_SUCCESS;
+
+	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!eth_dev) {
+		PMD_LOG_INFO(INIT, "No allocated sxe2 ethdev found, nothing to remove");
+		goto l_end;
+	}
+
+	ret = sxe2_dev_uninit(eth_dev);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret);
+		goto l_end;
+	}
+	(void)rte_eth_dev_release_port(eth_dev);
+
+l_end:
+	return ret;
+}
+
+static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev,
+				 struct rte_eth_devargs *req_eth_da __rte_unused,
+				 u16 owner_id __rte_unused,
+				 struct sxe2_dev_kvargs_info *kvargs)
+{
+	struct rte_pci_device *pci_dev;
+	struct rte_eth_dev *eth_dev = NULL;
+	struct sxe2_adapter *adapter = NULL;
+	s32 ret = SXE2_SUCCESS;
+
+	if (!cdev) {
+		ret = SXE2_ERR_INVAL;
+		goto l_end;
+	}
+
+	pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+
+	eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter));
+	if (eth_dev == NULL) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			PMD_LOG_ERR(INIT, "Can not allocate ethdev");
+			ret = SXE2_ERR_NOMEM;
+		} else {
+			PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev");
+			ret = SXE2_ERR_INVAL;
+		}
+		goto l_end;
+	}
+
+	adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev);
+	adapter->dev_port_id = eth_dev->data->port_id;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		adapter->cdev = cdev;
+
+	ret = sxe2_dev_init(eth_dev, kvargs);
+	if (ret != SXE2_SUCCESS) {
+		PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret);
+		goto l_release_port;
+	}
+
+	rte_eth_dev_probing_finish(eth_dev);
+	PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!");
+	goto l_end;
+
+l_release_port:
+	(void)rte_eth_dev_release_port(eth_dev);
+l_end:
+	return ret;
+}
+
+static s32 sxe2_parse_eth_devargs(struct rte_device *dev,
+				  struct rte_eth_devargs *eth_da)
+{
+	int ret = 0;
+
+	if (dev->devargs == NULL)
+		return 0;
+
+	memset(eth_da, 0, sizeof(*eth_da));
+
+	if (dev->devargs->cls_str) {
+		ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1);
+		if (ret != 0) {
+			PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
+				    dev->devargs->cls_str);
+			return -rte_errno;
+		}
+	}
+
+	if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) {
+		ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1);
+		if (ret) {
+			PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
+				    dev->devargs->args);
+			return -rte_errno;
+		}
+	}
+
+	return 0;
+}
+
+static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs)
+{
+	struct rte_eth_devargs eth_da = { .nb_ports = 0 };
+	s32 ret = SXE2_SUCCESS;
+
+	ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da);
+	if (ret != 0) {
+		ret = SXE2_ERR_INVAL;
+		goto
l_end;
+	}
+
+	ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs);
+
+l_end:
+	return ret;
+}
+
+static struct sxe2_class_driver sxe2_eth_pmd = {
+	.drv_class = SXE2_CLASS_TYPE_ETH,
+	.name = "SXE2_ETH_PMD_DRIVER_NAME",
+	.probe = sxe2_eth_pmd_probe,
+	.remove = sxe2_eth_pmd_remove,
+	.id_table = pci_id_sxe2_tbl,
+	.intr_lsc = 1,
+	.intr_rmv = 1,
+};
+
+RTE_INIT(rte_sxe2_pmd_init)
+{
+	sxe2_common_init();
+	sxe2_class_driver_register(&sxe2_eth_pmd);
+}
+
+RTE_PMD_EXPORT_NAME(net_sxe2);
+RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl);
+RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2");
+
+#ifdef SXE2_DPDK_DEBUG
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG);
+#else
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE);
+#endif
diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h
new file mode 100644
index 0000000000..dc3a3175d1
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_ethdev.h
@@ -0,0 +1,295 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE	0x03
+
+#define SXE2_MODULE_SFF_8079		0x1
+#define SXE2_MODULE_SFF_8079_LEN	256
+#define SXE2_MODULE_SFF_8472		0x2
+#define SXE2_MODULE_SFF_8472_LEN	512
+#define SXE2_MODULE_SFF_8636		0x3
+#define SXE2_MODULE_SFF_8636_LEN	256
+#define SXE2_MODULE_SFF_8636_MAX_LEN	640
+#define SXE2_MODULE_SFF_8436		0x4
+#define SXE2_MODULE_SFF_8436_LEN	256
+#define SXE2_MODULE_SFF_8436_MAX_LEN	640
+
+enum sxe2_wk_type {
+	SXE2_WK_MONITOR,
+	SXE2_WK_MONITOR_IM,
+	SXE2_WK_POST,
+	SXE2_WK_MBX,
+};
+
+enum {
+	SXE2_FLAG_LEGACY_RX_ENABLE = 0,
+	SXE2_FLAG_LRO_ENABLE = 1,
+	SXE2_FLAG_RXQ_DISABLED = 2,
+	SXE2_FLAG_TXQ_DISABLED = 3,
+	SXE2_FLAG_DRV_REMOVING = 4,
+	SXE2_FLAG_RESET_DETECTED = 5,
+	SXE2_FLAG_CORE_RESET_DONE = 6,
+	SXE2_FLAG_RESET_ACTIVED = 7,
+	SXE2_FLAG_RESET_PENDING = 8,
+	SXE2_FLAG_RESET_REQUEST = 9,
+	SXE2_FLAGS_RESET_PROCESS_DONE = 10,
+	SXE2_FLAG_RESET_FAILED = 11,
+	SXE2_FLAG_DRV_PROBE_DONE = 12,
+	SXE2_FLAG_NETDEV_REGISTED = 13,
+	SXE2_FLAG_DRV_UP = 15,
+	SXE2_FLAG_DCB_ENABLE = 16,
+	SXE2_FLAG_FLTR_SYNC = 17,
+
+	SXE2_FLAG_EVENT_IRQ_DISABLED = 18,
+	SXE2_FLAG_SUSPEND = 19,
+	SXE2_FLAG_FNAV_ENABLE = 20,
+
+	SXE2_FLAGS_NBITS
+};
+
+struct sxe2_link_context {
+	rte_spinlock_t link_lock;
+	bool link_up;
+	u32 speed;
+};
+
+struct sxe2_devargs {
+	u8 flow_dup_pattern_mode;
+	u8 func_flow_direct_en;
+	u8 fnav_stat_type;
+	u8 high_performance_mode;
+	u8 sched_layer_mode;
+	u8 sw_stats_en;
+	u8 rx_low_latency;
+};
+
+#define SXE2_PCI_MAP_BAR_INVALID	((u8)0xff)
+#define SXE2_PCI_MAP_INVALID_VAL	((u32)0xffffffff)
+
+enum sxe2_pci_map_resource {
+	SXE2_PCI_MAP_RES_INVALID = 0,
+	SXE2_PCI_MAP_RES_DOORBELL_TX,
+	SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+	SXE2_PCI_MAP_RES_IRQ_DYN,
+	SXE2_PCI_MAP_RES_IRQ_ITR,
+	SXE2_PCI_MAP_RES_IRQ_MSIX,
+	SXE2_PCI_MAP_RES_PTP,
+	SXE2_PCI_MAP_RES_MAX_COUNT,
+};
+
+enum sxe2_udp_tunnel_protocol {
+	SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0,
+
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev;
+#endif
+	u8 vlan_flag;
+	u8 use_ctx:1,
+	   res:7;
+};
+
+struct sxe2_rx_queue;
+struct sxe2_rxq_ops {
+	void (*queue_reset)(struct sxe2_rx_queue *rxq);
+	void (*mbufs_release)(struct sxe2_rx_queue *rxq);
+};
+
+struct sxe2_rxq_stats {
+	u64 rx_pkts_num;
+	u64 rx_rss_pkt_num;
+	u64 rx_fnav_pkt_num;
+	u64 rx_ptp_pkt_num;
+	u32 rx_vec_align_drop;
+
+	u32 rxdid_1588_err;
+	u32 ip_csum_err;
+	u32 l4_csum_err;
+	u32 outer_ip_csum_err;
+	u32 outer_l4_csum_err;
+	u32 macsec_err;
+	u32 ipsec_err;
+
+	u64 ptype_pkts[SXE2_MAX_PTYPE_NUM];
+};
+
+struct sxe2_rxq_sw_stats {
+	RTE_ATOMIC(uint64_t) pkts;
+	RTE_ATOMIC(uint64_t) bytes;
+	RTE_ATOMIC(uint64_t) drop_pkts;
+	RTE_ATOMIC(uint64_t) drop_bytes;
+	RTE_ATOMIC(uint64_t) unicast_pkts;
+	RTE_ATOMIC(uint64_t) multicast_pkts;
+	RTE_ATOMIC(uint64_t) broadcast_pkts;
+};
+
+struct sxe2_rx_queue {
+	volatile union sxe2_rx_desc *desc_ring;
+	volatile u32 *rdt_reg_addr;
+	struct rte_mempool *mb_pool;
+	struct rte_mbuf **buffer_ring;
+	struct sxe2_vsi *vsi;
+
+	u64 offloads;
+	u16 ring_depth;
+	u16 rx_free_thresh;
+	u16 processing_idx;
+	u16 hold_num;
+	u16 next_ret_pkt;
+	u16 batch_alloc_trigger;
+	u16 completed_pkts_num;
+	u64 update_time;
+	u32 desc_ts;
+	u64 ts_high;
+	u32 ts_low;
+	u32 ts_need_update;
+	u8 crc_len;
+	bool fnav_enable;
+
+	struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM];
+
+	struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2];
+	struct rte_mbuf *pkt_first_seg;
+	struct rte_mbuf *pkt_last_seg;
+	u64 mbuf_init_value;
+	u16 realloc_num;
+	u16 realloc_start;
+	struct rte_mbuf fake_mbuf;
+
+	const struct rte_memzone *mz;
+	struct sxe2_rxq_ops ops;
+	rte_iova_t base_addr;
+	u16 reg_idx;
+	u32 low_desc_waterline : 16;
+	u32 ldw_event_pending : 1;
+#ifdef SXE2_DPDK_DEBUG
+	struct sxe2_rxq_stats rx_stats;
+	struct sxe2_rxq_stats rx_stats_cur;
+	struct sxe2_rxq_stats rx_stats_prev;
+#endif
+	struct sxe2_rxq_sw_stats sw_stats;
+	u16 port_id;
+	u16 queue_id;
+	u16 idx_in_func;
+	u16 rx_buf_len;
+	u16
rx_hdr_len;
+	u16 max_pkt_len;
+	bool rx_deferred_start;
+	u8 drop_en;
+};
+
+#ifdef SXE2_DPDK_DEBUG
+#define SXE2_RX_STATS_CNT(rxq, name, num) \
+	((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num))
+
+#define SXE2_TX_STATS_CNT(txq, name, num) \
+	((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num))
+#else
+#define SXE2_RX_STATS_CNT(rxq, name, num)
+#define SXE2_TX_STATS_CNT(txq, name, num)
+#endif
+
+#ifdef SXE2_DPDK_DEBUG_RXTX_LOG
+#define PMD_LOG_RX_DEBUG(fmt, ...) PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__)
+#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__)
+#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__)
+#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__)
+#else
+#define PMD_LOG_RX_DEBUG(fmt, ...)
+#define PMD_LOG_RX_INFO(fmt, ...)
+#define PMD_LOG_TX_DEBUG(fmt, ...)
+#define PMD_LOG_TX_INFO(fmt, ...)
+#endif
+
+struct sxe2_adapter;
+
+void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+				  struct sxe2_drv_queue_caps *q_caps);
+
+s32 sxe2_queues_init(struct rte_eth_dev *dev);
+
+#endif
diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h
new file mode 100644
index 0000000000..7284cea4b6
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_common.h
@@ -0,0 +1,541 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd.
+ */
+
+#ifndef __sxe2_VSI_H__
+#define __sxe2_VSI_H__
+#include <rte_os.h>
+#include "sxe2_type.h"
+#include "sxe2_drv_cmd.h"
+
+#define SXE2_MAX_BOND_MEMBER_CNT 4
+
+enum sxe2_drv_type {
+	SXE2_MAX_DRV_TYPE_DPDK = 0,
+	SXE2_MAX_DRV_TYPE_KERNEL,
+	SXE2_MAX_DRV_TYPE_CNT,
+};
+
+#define SXE2_MAX_USER_PRIORITY (8)
+
+#define SXE2_DFLT_NUM_RX_DESC 512
+#define SXE2_DFLT_NUM_TX_DESC 512
+
+#define SXE2_DFLT_Q_NUM_OTHER_VSI 1
+#define SXE2_INVALID_VSI_ID 0xFFFF
+
+struct sxe2_adapter;
+struct sxe2_drv_vsi_caps;
+struct rte_eth_dev;
+
+enum sxe2_vsi_type {
+	SXE2_VSI_T_PF = 0,
+	SXE2_VSI_T_VF,
+	SXE2_VSI_T_CTRL,
+	SXE2_VSI_T_LB,
+	SXE2_VSI_T_MACVLAN,
+	SXE2_VSI_T_ESW,
+	SXE2_VSI_T_RDMA,
+	SXE2_VSI_T_DPDK_PF,
+	SXE2_VSI_T_DPDK_VF,
+	SXE2_VSI_T_DPDK_ESW,
+	SXE2_VSI_T_NR,
+};
+
+struct sxe2_queue_info {
+	u16 base_idx_in_nic;
+	u16 base_idx_in_func;
+	u16 q_cnt;
+	u16 depth;
+	u16 rx_buf_len;
+	u16 max_frame_len;
+	struct sxe2_queue **queues;
+};
+
+struct sxe2_vsi_irqs {
+	u16 avail_cnt;
+	u16 used_cnt;
+	u16 base_idx_in_pf;
+};
+
+enum {
+	sxe2_VSI_DOWN = 0,
+	sxe2_VSI_CLOSE,
+	sxe2_VSI_DISABLE,
+	sxe2_VSI_MAX,
+};
+
+struct sxe2_stats {
+	u64 ipackets;
+
+	u64 opackets;
+
+	u64 ibytes;
+
+	u64 obytes;
+
+	u64 ierrors;
+
+	u64 imissed;
+
+	u64 rx_out_of_buffer;
+	u64 rx_qblock_drop;
+
+	u64 tx_frame_good;
+	u64 rx_frame_good;
+	u64 rx_crc_errors;
+	u64 tx_bytes_good;
+	u64 rx_bytes_good;
+	u64 tx_multicast_good;
+	u64 tx_broadcast_good;
+	u64 rx_multicast_good;
+	u64 rx_broadcast_good;
+	u64 rx_len_errors;
+	u64 rx_out_of_range_errors;
+	u64 rx_oversize_pkts_phy;
+	u64 rx_symbol_err;
+	u64 rx_pause_frame;
+	u64 tx_pause_frame;
+
+	u64 rx_discards_phy;
+	u64 rx_discards_ips_phy;
+
+	u64 tx_dropped_link_down;
+	u64 rx_undersize_good;
+	u64 rx_runt_error;
+	u64 tx_bytes_good_bad;
+	u64 tx_frame_good_bad;
+	u64 rx_jabbers;
+	u64 rx_size_64;
+	u64 rx_size_65_127;
+	u64 rx_size_128_255;
+	u64 rx_size_256_511;
+	u64 rx_size_512_1023;
+	u64
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v1 6/9] drivers: support PCI BAR mapping
2026-04-30 7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5
` (4 preceding siblings ...)
2026-04-30 7:01 ` [PATCH v1 5/9] drivers: add base driver probe skeleton liujie5
@ 2026-04-30 7:01 ` liujie5
2026-04-30 7:01 ` [PATCH v1 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5
` (2 subsequent siblings)
8 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-04-30 7:01 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>

Implement PCI BAR (Base Address Register) mapping and unmapping logic
to enable MMIO (Memory Mapped I/O) access to hardware registers. The
driver retrieves the BAR0 virtual address from the PCI resource during
the probing phase. This mapping is used for subsequent register-level
operations. Proper cleanup is implemented in the device close path.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 drivers/common/sxe2/sxe2_ioctl_chnl.c |  34 +++
 drivers/net/sxe2/sxe2_ethdev.c        | 307 ++++++++++++++++++++++++++
 drivers/net/sxe2/sxe2_ethdev.h        |  18 ++
 3 files changed, 359 insertions(+)

diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index e22731065d..2bd7c2b2eb 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
 	return ret;
 }
 
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap)
+void
+*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset)
+{
+	s32 cmd_fd = 0;
+	void *virt = NULL;
+
+	if (cdev->config.kernel_reset) {
+		PMD_LOG_WARN(COM, "kernel was reset, need to restart app.");
+		goto l_err;
+	}
+
+	cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+	if (cmd_fd < 0) {
+		PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+		goto l_err;
+	}
+
+	PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%"PRIx64", src=0x%"PRIx64", offset=0x%"PRIx64"",
+
cmd_fd, bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
+
+	virt = mmap(NULL, len, PROT_READ | PROT_WRITE,
+		MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
+	if (virt == MAP_FAILED) {
+		PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%"PRIx64", offset=0x%"PRIx64", err:%s",
+			cmd_fd, len, offset, strerror(errno));
+		goto l_err;
+	}
+
+	return virt;
+l_err:
+	return NULL;
+}
+
 RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap)
 s32
 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len)
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index f2de249279..fa6304ebbc 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = {
 	{ .vendor_id = 0, },
 };
 
+static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = {
+	/* SXE2_PCI_MAP_RES_INVALID */
+	{0, 0, 0},
+	/* SXE2_PCI_MAP_RES_DOORBELL_TX */
+	{ SXE2_TXQ_LEGACY_DBLL(0), 0, 4},
+	/* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */
+	{ SXE2_RXQ_TAIL(0), 0, 4},
+	/* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 is used by default) */
+	{ SXE2_VF_DYN_CTL(0), 0, 4},
+	/* SXE2_PCI_MAP_RES_IRQ_ITR */
+	{ SXE2_VF_INT_ITR(0, 0), 0, 4},
+	/* SXE2_PCI_MAP_RES_IRQ_MSIX */
+	{ SXE2_BAR4_MSIX_CTL(0), 4, 0x10},
+};
+
 static s32 sxe2_dev_configure(struct rte_eth_dev *dev)
 {
 	s32 ret = SXE2_SUCCESS;
@@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev)
 	(void)sxe2_dev_stop(dev);
 
 	sxe2_vsi_uninit(dev);
+	sxe2_dev_pci_map_uinit(dev);
 
 	return SXE2_SUCCESS;
 }
@@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
 	.dev_infos_get = sxe2_dev_infos_get,
 };
 
+struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
+		enum sxe2_pci_map_resource res_type)
+{
+	struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+	struct sxe2_pci_map_bar_info *bar_info = NULL;
+	u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID;
+	u8 i;
+
+	bar_idx =
map_ctxt->addr_info[res_type].bar_idx;
+	if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) {
+		PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type);
+		goto l_end;
+	}
+
+	for (i = 0; i < map_ctxt->bar_cnt; i++) {
+		if (bar_idx == map_ctxt->bar_info[i].bar_idx) {
+			bar_info = &map_ctxt->bar_info[i];
+			break;
+		}
+	}
+
+l_end:
+	return bar_info;
+}
+
 static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter,
 		struct sxe2_drv_dev_caps_resp *dev_caps)
 {
@@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter)
 	return ret;
 }
 
+s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
+		enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset)
+{
+	struct sxe2_pci_map_bar_info *bar_info = NULL;
+	struct sxe2_pci_map_segment_info *seg_info = NULL;
+	void *map_addr = NULL;
+	s32 ret = SXE2_SUCCESS;
+	size_t page_size = 0;
+	size_t aligned_len = 0;
+	size_t page_inner_offset = 0;
+	off_t aligned_offset = 0;
+	u8 i = 0;
+
+	if (org_len == 0) {
+		PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, org_len = 0");
+		ret = SXE2_ERR_FAULT;
+		goto l_end;
+	}
+
+	bar_info = sxe2_dev_get_bar_info(adapter, res_type);
+	if (!bar_info) {
+		PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type);
+		ret = SXE2_ERR_FAULT;
+		goto l_end;
+	}
+	seg_info = bar_info->seg_info;
+
+	page_size = rte_mem_page_size();
+
+	aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size);
+	page_inner_offset = org_offset - aligned_offset;
+	aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size);
+
+	map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset);
+	if (!map_addr) {
+		PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%"PRIu64", page_size=%zu",
+			res_type, org_len, page_size);
+		ret = SXE2_ERR_FAULT;
+		goto l_end;
+	}
+
+	for (i = 0; i < bar_info->map_cnt; i++) {
+		if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID)
+			continue;
+		seg_info[i].type = res_type;
+		seg_info[i].addr = map_addr;
+
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v1 7/9] common/sxe2: add ioctl interface for DMA map and unmap 2026-04-30 7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5 ` (5 preceding siblings ...) 2026-04-30 7:01 ` [PATCH v1 6/9] drivers: support PCI BAR mapping liujie5 @ 2026-04-30 7:01 ` liujie5 2026-04-30 7:01 ` [PATCH v1 8/9] net/sxe2: support queue setup and control liujie5 2026-04-30 7:01 ` [PATCH v1 9/9] net/sxe2: add data path for Rx and Tx liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 7:01 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma map, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c 
b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, restart the app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "iommu enabled, pa mode not supported"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "no iommu, va mode not supported, please use pa mode."); + ret = SXE2_ERR_IO; + goto l_end; + } + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if 
(cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, restart the app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v1 8/9] net/sxe2: support queue setup and control 2026-04-30 7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5 ` (6 preceding siblings ...) 2026-04-30 7:01 ` [PATCH v1 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-04-30 7:01 ` liujie5 2026-04-30 7:01 ` [PATCH v1 9/9] net/sxe2: add data path for Rx and Tx liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 7:01 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 160a0de8ed..803e47c1aa 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -17,6 +17,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 
fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if 
(bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { #define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const 
struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold 
sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth = ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + 
rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configured with KEEP_CRC.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, 
socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc *desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = 
rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + 
(void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + 
u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2 tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
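As a standalone illustration of the Tx threshold rules enforced by sxe2_txq_arg_validate() in the patch above, the checks can be restated outside the driver. The function name and sample values below are illustrative only and are not part of the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mirror of the validation in sxe2_txq_arg_validate():
 * tx_rs_thresh must be < ring_depth - 2, tx_free_thresh must be
 * < ring_depth - 3, tx_rs_thresh must not exceed tx_free_thresh,
 * and ring_depth must be an exact multiple of tx_rs_thresh. */
static bool tx_thresh_valid(uint16_t ring_depth, uint16_t rs_thresh,
			    uint16_t free_thresh)
{
	if (rs_thresh >= ring_depth - 2)
		return false;
	if (free_thresh >= ring_depth - 3)
		return false;
	if (rs_thresh > free_thresh)
		return false;
	/* guard the modulo: the driver substitutes a nonzero default
	 * before reaching this check, so 0 is rejected here explicitly */
	if (rs_thresh == 0 || (ring_depth % rs_thresh) != 0)
		return false;
	return true;
}
```

Requiring the ring depth to be a multiple of tx_rs_thresh lets the cleanup path walk the ring in fixed rs_thresh-sized steps without ever hitting a partial final batch at the wrap point.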
* [PATCH v1 9/9] net/sxe2: add data path for Rx and Tx 2026-04-30 7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5 ` (7 preceding siblings ...) 2026-04-30 7:01 ` [PATCH v1 8/9] net/sxe2: support queue setup and control liujie5 @ 2026-04-30 7:01 ` liujie5 2026-04-30 9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 8 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-04-30 7:01 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_tx_pkts and sxe2_rx_pkts_scattered as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 1 + drivers/net/sxe2/sxe2_ethdev.c | 6 + drivers/net/sxe2/sxe2_txrx.c | 249 +++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 815 ++++++++++++++++++++++++++++++ 5 files changed, 1092 insertions(+) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 803e47c1aa..761d624a88 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -19,6 +19,7 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..856da2c296 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include 
"sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -760,6 +764,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + 
(txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + 
PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != 
(rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..f0a8c9167e --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,815 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("Tx cleanup: desc[%u] is not done. port_id=%u " + "queue_id=%u val=0x%" PRIx64 "", clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, 
tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case 
RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer *next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = 
tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc 
= &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static __rte_always_inline void +sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 desc_offset; + + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, (*tx_pkts)->data_len, 0); +} + +static __rte_always_inline void +sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 i; + u32 desc_offset; + + for (i = 0; i < SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) { + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, (*tx_pkts)->data_len, 0); + } +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? 
(rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 
1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; 
+ PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, 
first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = 
rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next 
= cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + 
sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
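[Editor's note] The Rx completion handling above has two subtle pieces: the checksum-flag derivation in sxe2_rx_desc_error_para() (no L3L4P bit means the hardware verified nothing; otherwise each error bit selects the BAD flag and its absence the GOOD one) and the CRC-strip corner case in sxe2_rx_pkts_scattered() (when the final segment is no longer than the CRC, it is freed and the spilled CRC bytes are trimmed from the previous segment). A standalone sketch of both, with illustrative placeholder bit masks and flag values rather than the real SXE2 or DPDK definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative status/error bits (placeholders, not the SXE2 layout). */
#define L3L4P_BIT (1ULL << 1)	/* HW verified L3/L4 checksums */
#define IPE_BIT   (1ULL << 2)	/* IPv4 header checksum error */
#define L4E_BIT   (1ULL << 3)	/* L4 checksum error */

/* Illustrative mbuf-style offload flags. */
#define F_IP_GOOD 0x1ULL
#define F_IP_BAD  0x2ULL
#define F_L4_GOOD 0x4ULL
#define F_L4_BAD  0x8ULL

/* Without L3L4P the HW checked nothing, so report no flags at all;
 * otherwise map each error bit to BAD and its absence to GOOD. */
static uint64_t rx_desc_csum_flags(uint64_t qw1)
{
	uint64_t flags = 0;

	if (!(qw1 & L3L4P_BIT))
		return 0;

	flags |= (qw1 & IPE_BIT) ? F_IP_BAD : F_IP_GOOD;
	flags |= (qw1 & L4E_BIT) ? F_L4_BAD : F_L4_GOOD;
	return flags;
}

#define CRC_LEN 4U	/* stands in for RTE_ETHER_CRC_LEN */

/* When the last segment holds only (part of) the CRC it is freed, and
 * the previous segment's data_len shrinks by the CRC bytes it carried. */
static uint16_t crc_strip_prev_len(uint16_t prev_data_len, uint16_t last_seg_len)
{
	assert(last_seg_len <= CRC_LEN);
	return (uint16_t)(prev_data_len + last_seg_len - CRC_LEN);
}
```

For example, a 62-byte frame split 60/2 across two buffers ends up as a single 58-byte segment after CRC stripping: the 2-byte tail is freed and 2 of the 4 CRC bytes are trimmed from the first segment.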
* [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver 2026-04-30 7:01 ` [PATCH v1 9/9] net/sxe2: add data path for Rx and Tx liujie5 @ 2026-04-30 9:22 ` liujie5 2026-04-30 9:22 ` [PATCH v2 1/9] mailmap: add Jie Liu liujie5 ` (8 more replies) 0 siblings, 9 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 9:22 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch set implements core functionality for the SXE2 PMD, which is a Linkdata sxe2 ethernet driver. V2: - Addressed AI review comments Jie Liu (9): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control net/sxe2: add data path for Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 3 + drivers/common/sxe2/meson.build | 15 + drivers/common/sxe2/sxe2_common.c | 684 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 26 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 974 +++++++++++++++++++++
drivers/net/sxe2/sxe2_ethdev.h | 316 +++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 249 ++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 782 +++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 40 files changed, 8688 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 
drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
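[Editor's note] The Tx data path added in patch 9/9 batches completion write-backs: the RS (report status) bit is set on a descriptor only once txq->desc_used_num reaches rs_thresh, and the tail register is written once per burst rather than per packet, which keeps PCIe write-back traffic low. A minimal counter-only sketch of that batching decision; the names here are illustrative, not the driver's API:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative state, mirroring desc_used_num/rs_thresh in the Tx queue. */
struct tx_rs_state {
	uint16_t rs_thresh;	/* request a write-back every N descriptors */
	uint16_t desc_used_num;	/* descriptors queued since the last RS */
};

/* Account for ndesc newly queued descriptors; return 1 when the current
 * descriptor should carry the RS bit (and reset the counter), else 0. */
static int tx_should_set_rs(struct tx_rs_state *s, uint16_t ndesc)
{
	s->desc_used_num = (uint16_t)(s->desc_used_num + ndesc);
	if (s->desc_used_num >= s->rs_thresh) {
		s->desc_used_num = 0;
		return 1;
	}
	return 0;
}
```

With rs_thresh = 32, roughly every 32nd descriptor asks the NIC for a completion write-back, while the threshold still bounds how many descriptors can be outstanding before software can reclaim transmitted mbufs.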
* [PATCH v2 1/9] mailmap: add Jie Liu 2026-04-30 9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 @ 2026-04-30 9:22 ` liujie5 2026-04-30 9:22 ` [PATCH v2 2/9] doc: add sxe2 guide and release notes liujie5 ` (7 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 9:22 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 0e0d83e1c6..a6c3319dec 100644 --- a/.mailmap +++ b/.mailmap @@ -738,6 +738,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v2 2/9] doc: add sxe2 guide and release notes 2026-04-30 9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 2026-04-30 9:22 ` [PATCH v2 1/9] mailmap: add Jie Liu liujie5 @ 2026-04-30 9:22 ` liujie5 2026-04-30 9:22 ` [PATCH v2 3/9] drivers: add sxe2 basic structures liujie5 ` (6 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 9:22 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for the SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 3 +++ 4 files changed, 38 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates the feature is only supported when the +; non-vector path is selected.
+; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps network adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported. + +Implementation details +---------------------- + +For security and robustness reasons, this driver only deals with virtual +memory addresses. The way resource allocations are handled by the kernel, +combined with hardware specifications that allow it to handle virtual memory +addresses directly, ensures that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index 060b26ff61..93fb0072a9 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -55,6 +55,9 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added Linkdata sxe2 ethernet driver.** + + Added a network driver for Linkdata network adapters.
Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v2 3/9] drivers: add sxe2 basic structures 2026-04-30 9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 2026-04-30 9:22 ` [PATCH v2 1/9] mailmap: add Jie Liu liujie5 2026-04-30 9:22 ` [PATCH v2 2/9] doc: add sxe2 guide and release notes liujie5 @ 2026-04-30 9:22 ` liujie5 2026-04-30 9:22 ` [PATCH v2 4/9] common/sxe2: add base driver skeleton liujie5 ` (5 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 9:22 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 13 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1959 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..7d448629d5 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void +sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, 
Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\
+	do { \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_LOG_INFO(logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_LOG_NOTICE(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_LOG_WARN(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_LOG_ERR(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_LOG_CRIT(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_LOG_ALERT(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_LOG_EMERG(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open();\
+		SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close();\
+	} while (0)
+
+#else
+#define SXE2_PMD_LOG(level, log_type, ...) \
+	RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \
+		__func__, __VA_ARGS__)
+
+#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \
+	RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \
+		__func__ RTE_LOG_COMMA \
+		adapter->dev_port_id, __VA_ARGS__)
+
+#define PMD_LOG_DEBUG(logtype, fmt, ...) \
+	SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_INFO(logtype, fmt, ...) \
+	SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_NOTICE(logtype, fmt, ...) \
+	SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_WARN(logtype, fmt, ...) \
+	SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_ERR(logtype, fmt, ...) \
+	SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_CRIT(logtype, fmt, ...) \
+	SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_ALERT(logtype, fmt, ...) \
+	SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_EMERG(logtype, fmt, ...) \
+	SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#endif
+
+#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>")
+
+#ifdef SXE2_DPDK_DEBUG
+
+#define LOG_DEBUG(fmt, ...) \
+	PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__)
+
+#define LOG_INFO(fmt, ...) \
+	PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__)
+
+#define LOG_WARN(fmt, ...) \
+	PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__)
+
+#define LOG_ERROR(fmt, ...) \
+	PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__)
+
+#define LOG_DEBUG_BDF(dev_name, fmt, ...) \
+	PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_INFO_BDF(dev_name, fmt, ...) \
+	PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_WARN_BDF(dev_name, fmt, ...) \
+	PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_ERROR_BDF(dev_name, fmt, ...) \
+	PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__)
+
+#else
+#define LOG_DEBUG(fmt, ...)
+#define LOG_INFO(fmt, ...)
+#define LOG_WARN(fmt, ...)
+#define LOG_ERROR(fmt, ...)
+#define LOG_DEBUG_BDF(dev_name, fmt, ...) \
+	PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_INFO_BDF(dev_name, fmt, ...) \
+	PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_WARN_BDF(dev_name, fmt, ...) \
+	PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_ERROR_BDF(dev_name, fmt, ...) \
+	PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__)
+#endif
+
+#ifdef SXE2_DPDK_DEBUG
+#define LOG_DEV_DEBUG(fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_DEV_INFO(fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_INFO_BDF(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_DEV_WARN(fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_WARN_BDF(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_DEV_ERR(fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_MSG_DEBUG(msglvl, fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_MSG_INFO(msglvl, fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_INFO_BDF(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_MSG_WARN(msglvl, fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_WARN_BDF(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_MSG_ERR(msglvl, fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#else
+
+#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter)
+#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter)
+#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter)
+#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter)
+#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter)
+#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter)
+#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter)
+#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter)
+#endif
+
+#endif /* SXE2_COMMON_LOG_H__ */
diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h
new file mode 100644
index 0000000000..89a715eaef
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_errno.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_ERRNO_H__
+#define __SXE2_ERRNO_H__
+#include <errno.h>
+
+enum sxe2_status {
+
+	SXE2_SUCCESS = 0,
+
+	SXE2_ERR_PERM = -EPERM,
+	SXE2_ERR_NOFILE = -ENOENT,
+	SXE2_ERR_NOENT = -ENOENT,
+	SXE2_ERR_SRCH = -ESRCH,
+	SXE2_ERR_INTR = -EINTR,
+	SXE2_ERR_IO = -EIO,
+	SXE2_ERR_NXIO = -ENXIO,
+	SXE2_ERR_2BIG = -E2BIG,
+	SXE2_ERR_NOEXEC = -ENOEXEC,
+	SXE2_ERR_BADF = -EBADF,
+	SXE2_ERR_CHILD = -ECHILD,
+	SXE2_ERR_AGAIN = -EAGAIN,
+	SXE2_ERR_NOMEM = -ENOMEM,
+	SXE2_ERR_ACCES = -EACCES,
+	SXE2_ERR_FAULT = -EFAULT,
+	SXE2_ERR_BUSY = -EBUSY,
+	SXE2_ERR_EXIST = -EEXIST,
+	SXE2_ERR_XDEV = -EXDEV,
+	SXE2_ERR_NODEV = -ENODEV,
+	SXE2_ERR_NOTSUP = -ENOTSUP,
+	SXE2_ERR_NOTDIR = -ENOTDIR,
+	SXE2_ERR_ISDIR = -EISDIR,
+	SXE2_ERR_INVAL = -EINVAL,
+	SXE2_ERR_NFILE = -ENFILE,
+	SXE2_ERR_MFILE = -EMFILE,
+	SXE2_ERR_NOTTY = -ENOTTY,
+	SXE2_ERR_FBIG = -EFBIG,
+	SXE2_ERR_NOSPC = -ENOSPC,
+	SXE2_ERR_SPIPE = -ESPIPE,
+	SXE2_ERR_ROFS = -EROFS,
+	SXE2_ERR_MLINK = -EMLINK,
+	SXE2_ERR_PIPE = -EPIPE,
+	SXE2_ERR_DOM = -EDOM,
+	SXE2_ERR_RANGE = -ERANGE,
+	SXE2_ERR_DEADLOCK = -EDEADLK,
+	SXE2_ERR_DEADLK = -EDEADLK,
+	SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG,
+	SXE2_ERR_NOLCK = -ENOLCK,
+	SXE2_ERR_NOSYS = -ENOSYS,
+	SXE2_ERR_NOTEMPTY = -ENOTEMPTY,
+	SXE2_ERR_ILSEQ = -EILSEQ,
+	SXE2_ERR_NODATA = -ENODATA,
+	SXE2_ERR_CANCELED = -ECANCELED,
+	SXE2_ERR_TIMEDOUT = -ETIMEDOUT,
+
+	SXE2_ERROR = -150,
+	SXE2_ERR_NO_MEMORY = -151,
+	SXE2_ERR_HW_VERSION = -152,
+	SXE2_ERR_FW_VERSION = -153,
+	SXE2_ERR_FW_MODE = -154,
+
+	SXE2_ERR_CMD_ERROR = -156,
+	SXE2_ERR_CMD_NO_MEMORY = -157,
+	SXE2_ERR_CMD_NOT_READY = -158,
+	SXE2_ERR_CMD_TIMEOUT = -159,
+	SXE2_ERR_CMD_CANCELED = -160,
+	SXE2_ERR_CMD_RETRY = -161,
+	SXE2_ERR_CMD_HW_CRITICAL = -162,
+	SXE2_ERR_CMD_NO_DATA = -163,
+	SXE2_ERR_CMD_INVAL_SIZE = -164,
+	SXE2_ERR_CMD_INVAL_TYPE = -165,
+	SXE2_ERR_CMD_INVAL_LEN = -165,
+	SXE2_ERR_CMD_INVAL_MAGIC = -166,
+	SXE2_ERR_CMD_INVAL_HEAD = -167,
+	SXE2_ERR_CMD_INVAL_ID = -168,
+
+	SXE2_ERR_DESC_NO_DONE = -171,
+
+	SXE2_ERR_INIT_ARGS_NAME_INVAL = -181,
+	SXE2_ERR_INIT_ARGS_VAL_INVAL = -182,
+	SXE2_ERR_INIT_VSI_CRITICAL = -183,
+
+	SXE2_ERR_CFG_FILE_PATH = -191,
+	SXE2_ERR_CFG_FILE = -192,
+	SXE2_ERR_CFG_INVALID_SIZE = -193,
+	SXE2_ERR_CFG_NO_PIPELINE_CFG = -194,
+
+	SXE2_ERR_RESET_TIMIEOUT = -200,
+	SXE2_ERR_VF_NOT_ACTIVE = -201,
+	SXE2_ERR_BUF_CSUM_ERR = -202,
+	SXE2_ERR_VF_DROP = -203,
+
+	SXE2_ERR_FLOW_PARAM = -301,
+	SXE2_ERR_FLOW_CFG = -302,
+	SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303,
+	SXE2_ERR_FLOW_PROF_EXISTS = -304,
+	SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305,
+	SXE2_ERR_FLOW_VSIG_FULL = -306,
+	SXE2_ERR_FLOW_VSIG_INFO = -307,
+	SXE2_ERR_FLOW_VSIG_NOT_FIND = -308,
+	SXE2_ERR_FLOW_VSIG_NOT_USED = -309,
+	SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310,
+	SXE2_ERR_FLOW_MAX_LIMIT = -311,
+
+	SXE2_ERR_SCHED_NEED_RECURSION = -400,
+
+	SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500,
+	SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501,
+};
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h
new file mode 100644
index 0000000000..984ea6214c
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_host_regs.h
@@ -0,0 +1,707 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_HOST_REGS_H__
+#define __SXE2_HOST_REGS_H__
+
+#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s))
+
+#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20))
+#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4))
+#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4))
+#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4))
+#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4))
+
+#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004
+#define SXE2_RXQ_CTRL_ENABLED 0x00000001
+#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3)
+
+#define SXE2_PCIEPROC_BASE 0x002d6000
+
+#define SXE2_PF_INT_BASE 0x00260000
+#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000)
+#define SXE2_PF_INT_ALLOC_FIRST 0x7FF
+#define SXE2_PF_INT_ALLOC_LAST_S 12
+#define SXE2_PF_INT_ALLOC_LAST \
+	(0x7FF << SXE2_PF_INT_ALLOC_LAST_S)
+#define SXE2_PF_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040)
+#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0)
+#define SXE2_PF_INT_OICR_UR BIT(1)
+#define SXE2_PF_INT_OICR_CA BIT(2)
+#define SXE2_PF_INT_OICR_VFLR BIT(3)
+#define SXE2_PF_INT_OICR_VFR_DONE BIT(4)
+#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5)
+#define SXE2_PF_INT_OICR_BFDE BIT(6)
+#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7)
+#define SXE2_PF_INT_OICR_ECC_ERR BIT(8)
+#define SXE2_PF_INT_OICR_GPIO BIT(9)
+#define SXE2_PF_INT_OICR_TSYN_TX BIT(11)
+#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12)
+#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13)
+#define SXE2_PF_INT_OICR_EXHAUST BIT(14)
+#define SXE2_PF_INT_OICR_FW BIT(15)
+#define SXE2_PF_INT_OICR_SWINT BIT(16)
+#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17)
+#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18)
+#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19)
+#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20)
+#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21)
+#define SXE2_PF_INT_OICR_GRST BIT(22)
+#define SXE2_PF_INT_OICR_FWQ_INT BIT(29)
+#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30)
+#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31)
+
+#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020)
+
+#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100)
+#define SXE2_PF_INT_FW_ABNORMAL BIT(0)
+#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1)
+#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18)
+#define SXE2_PF_INT_VFLR_DONE BIT(2)
+
+#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060)
+#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_OICR_CTL_ITR_IDX \
+	(0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0)
+#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF
+#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \
+	(0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0)
+#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100)
+#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x)
+
+#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120)
+#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7
+#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8)
+
+#define SXE2_VFG_RAM_INIT_DONE \
+	(SXE2_PF_INT_BASE + 0x0128)
+#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0)
+#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1)
+#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2)
+
+#define SXE2_LINK_REG_GET_10G_VALUE 4
+#define SXE2_LINK_REG_GET_25G_VALUE 1
+#define SXE2_LINK_REG_GET_50G_VALUE 2
+#define SXE2_LINK_REG_GET_100G_VALUE 3
+
+#define SXE2_PORT0_CNT 0
+#define SXE2_PORT1_CNT 1
+#define SXE2_PORT2_CNT 2
+#define SXE2_PORT3_CNT 3
+
+#define SXE2_LINK_STATUS_BASE (0x002ac200)
+#define SXE2_LINK_STATUS_PORT0_POS 3
+#define SXE2_LINK_STATUS_PORT1_POS 11
+#define SXE2_LINK_STATUS_PORT2_POS 19
+#define SXE2_LINK_STATUS_PORT3_POS 27
+#define SXE2_LINK_STATUS_MASK 1
+
+#define SXE2_LINK_SPEED_BASE (0x002ac200)
+#define SXE2_LINK_SPEED_PORT0_POS 0
+#define SXE2_LINK_SPEED_PORT1_POS 8
+#define SXE2_LINK_SPEED_PORT2_POS 16
+#define SXE2_LINK_SPEED_PORT3_POS 24
+#define SXE2_LINK_SPEED_MASK 7
+
+#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4))
+#define SXE2_PFVP_INT_ALLOC_FIRST_S 0
+
+#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S)
+#define SXE2_PFVP_INT_ALLOC_LAST_S 12
+#define SXE2_PFVP_INT_ALLOC_LAST_M \
+	(0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S)
+#define SXE2_PFVP_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4))
+#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0
+
+#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S)
+#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12
+
+#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \
+	(0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S)
+#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4))
+#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0
+#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S)
+#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12
+#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S)
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16)
+
+#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4))
+#define SXE2_VSI_PF_ID_S 0
+#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S)
+#define SXE2_VSI_PF_EN_M BIT(3)
+
+#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4))
+#define SXE2_MBX_CTL_MSIX_INDX_S 0
+#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S)
+#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30)
+
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx))
+#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11
+#define SXE2_PF_INT_TQCTL_ITR_IDX \
+	(0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S)
+#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx))
+#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11
+#define SXE2_PF_INT_RQCTL_ITR_IDX \
+	(0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S)
+#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx))
+#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F)
+#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \
+	(0x3F)
+#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6))
+#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7)
+#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \
+	(0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT)
+
+#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \
+	(SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx))
+#define SXE2_VF_INT_ITR_INTERVAL 0xFFF
+
+#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx))
+#define SXE2_VF_DYN_CTL_INTENABLE BIT(0)
+#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1)
+#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2)
+#define SXE2_VF_DYN_CTL_ITR_IDX_S \
+	3
+#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3
+#define SXE2_VF_DYN_CTL_INTERVAL_S 5
+#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24)
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3
+
+#define SXE2_VF_DYN_CTL_INTENABLE_MSK \
+	BIT(31)
+
+#define SXE2_BAR4_MSIX_BASE 0
+#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10))
+#define SXE2_BAR4_MSIX_ENABLE 0
+#define SXE2_BAR4_MSIX_DISABLE 1
+
+#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4))
+
+#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT7_HEAD_S 0
+#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S)
+#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16
+#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S)
+
+#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100))
+
+#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800
+#define SXE2_TXQ_CTRL_SW_EN_M BIT(0)
+#define SXE2_TXQ_CTRL_HW_EN_M BIT(1)
+
+#define SXE2_TXQ_CTXT2_PROT_IDX_S 0
+#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0)
+#define SXE2_TXQ_CTXT2_CGD_IDX_S 4
+#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4)
+#define SXE2_TXQ_CTXT2_PF_IDX_S 9
+#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9)
+#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12
+#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12)
+#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23
+#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23)
+#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25
+#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25)
+#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26
+#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26)
+#define SXE2_TXQ_CTXT2_WB_MODE_S 27
+#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27)
+#define SXE2_TXQ_CTXT2_ITR_WB_S 28
+#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28)
+#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29
+#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29)
+#define SXE2_TXQ_CTXT2_SSO_EN_S 30
+#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30)
+
+#define SXE2_TXQ_CTXT3_SRC_VSI_S 0
+#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0)
+#define SXE2_TXQ_CTXT3_CPU_ID_S 12
+#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12)
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20)
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21)
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22)
+
+#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0
+#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13
+#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13)
+#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14
+#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14)
+#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15
+#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15)
+#define SXE2_TXQ_CTXT3_QLEN_S 16
+#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16)
+
+#define SXE2_RX_BUF_CHAINED_MAX 10
+#define SXE2_RX_DESC_BASE_ADDR_UNIT 7
+#define SXE2_RX_HBUF_LEN_UNIT 6
+#define SXE2_RX_DBUF_LEN_UNIT 7
+#define SXE2_RX_DBUF_LEN_MASK (~0x7F)
+#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7)
+
+enum {
+	SXE2_RX_CTXT0 = 0,
+	SXE2_RX_CTXT1,
+	SXE2_RX_CTXT2,
+	SXE2_RX_CTXT3,
+	SXE2_RX_CTXT4,
+	SXE2_RX_CTXT_CNT,
+};
+
+#define SXE2_RX_CTXT_BASE_L_S 0
+#define SXE2_RX_CTXT_BASE_L_W 32
+
+#define SXE2_RX_CTXT_BASE_H_S 0
+#define SXE2_RX_CTXT_BASE_H_W 25
+#define SXE2_RX_CTXT_DEPTH_L_S 25
+#define SXE2_RX_CTXT_DEPTH_L_W 7
+
+#define SXE2_RX_CTXT_DEPTH_H_S 0
+#define SXE2_RX_CTXT_DEPTH_H_W 6
+
+#define SXE2_RX_CTXT_DBUFF_S 6
+#define SXE2_RX_CTXT_DBUFF_W 7
+
+#define SXE2_RX_CTXT_HBUFF_S 13
+#define SXE2_RX_CTXT_HBUFF_W 5
+
+#define SXE2_RX_CTXT_HSPLT_TYPE_S 18
+#define SXE2_RX_CTXT_HSPLT_TYPE_W 2
+
+#define SXE2_RX_CTXT_DESC_TYPE_S 20
+#define SXE2_RX_CTXT_DESC_TYPE_W 1
+
+#define SXE2_RX_CTXT_CRC_S 21
+#define SXE2_RX_CTXT_CRC_W 1
+
+#define SXE2_RX_CTXT_L2TAG_FLAG_S 23
+#define SXE2_RX_CTXT_L2TAG_FLAG_W 1
+
+#define SXE2_RX_CTXT_HSPLT_0_S 24
+#define SXE2_RX_CTXT_HSPLT_0_W 4
+
+#define SXE2_RX_CTXT_HSPLT_1_S 28
+#define SXE2_RX_CTXT_HSPLT_1_W 2
+
+#define SXE2_RX_CTXT_INVALN_STP_S 31
+#define SXE2_RX_CTXT_INVALN_STP_W 1
+
+#define SXE2_RX_CTXT_LRO_ENABLE_S 0
+#define SXE2_RX_CTXT_LRO_ENABLE_W 1
+
+#define SXE2_RX_CTXT_CPUID_S 3
+#define SXE2_RX_CTXT_CPUID_W 8
+
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14
+
+#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25
+#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4
+
+#define SXE2_RX_CTXT_RELAX_DATA_S 29
+#define SXE2_RX_CTXT_RELAX_DATA_W 1
+
+#define SXE2_RX_CTXT_RELAX_WB_S 30
+#define SXE2_RX_CTXT_RELAX_WB_W 1
+
+#define SXE2_RX_CTXT_RELAX_RD_S 31
+#define SXE2_RX_CTXT_RELAX_RD_W 1
+
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1
+
+#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6
+#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3
+
+#define SXE2_RX_CTXT_VF_ID_S 9
+#define SXE2_RX_CTXT_VF_ID_W 8
+
+#define SXE2_RX_CTXT_PF_ID_S 17
+#define SXE2_RX_CTXT_PF_ID_W 3
+
+#define SXE2_RX_CTXT_VF_ENABLE_S 20
+#define SXE2_RX_CTXT_VF_ENABLE_W 1
+
+#define SXE2_RX_CTXT_VSI_ID_S 21
+#define SXE2_RX_CTXT_VSI_ID_W 10
+
+#define SXE2_PF_CTRLQ_FW_BASE 0x00312000
+#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000)
+#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080)
+#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100)
+#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180)
+#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200)
+#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280)
+#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300)
+#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380)
+#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400)
+#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480)
+
+#define SXE2_PF_CTRLQ_MBX_BASE 0x00316000
+#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100)
+#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180)
+#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200)
+#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280)
+#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300)
+#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380)
+#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400)
+#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480)
+#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500)
+#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580)
+
+#define SXE2_CMD_REG_LEN_M 0x3FF
+#define SXE2_CMD_REG_LEN_VFE_M BIT(28)
+#define SXE2_CMD_REG_LEN_OVFL_M BIT(29)
+#define SXE2_CMD_REG_LEN_CRIT_M BIT(30)
+#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31)
+
+#define SXE2_CMD_REG_HEAD_M 0x3FF
+
+#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500)
+#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0)
+#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1)
+
+#define SXE2_TOP_CFG_BASE 0x00292000
+#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c)
+#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0)
+
+#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214)
+#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0)
+#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8)
+#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16)
+#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24)
+#define SXE2_FW_VER_FIX_SHIFT (8)
+#define SXE2_FW_VER_SUB_SHIFT (16)
+#define SXE2_FW_VER_MAIN_SHIFT (24)
+
+#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c)
+
+#define SXE2_STATUS SXE2_FW_VER
+
+#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210)
+
+#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218)
+
+#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c)
+#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0)
+#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0)
+
+#define SXE2_TX_OE_BASE 0x00030000
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894)
+#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18)
+#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+		(0x0a20 + 8 * (pri)))
+#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+		(0x0a60 + 8 * (pri)))
+#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+		(0x0aa0 + 8 * (pri)))
+#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988)
+#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28)
+#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+		(0x0b30 + 8 * (pri)))
+#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+		(0x0b70 + 8 * (pri)))
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h
new file mode 100644
index 0000000000..a41913fdd8
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_internal_ver.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_INTERNAL_VER_H__
+#define __SXE2_INTERNAL_VER_H__
+
+#define SXE2_VER_MAJOR_OFFSET (16)
+#define SXE2_MK_VER(major, minor) \
+	(major << SXE2_VER_MAJOR_OFFSET | minor)
+#define SXE2_MK_VER_MAJOR(ver) ((ver >> SXE2_VER_MAJOR_OFFSET) & 0xff)
+#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff)
+
+#define SXE2_ITR_VER_MAJOR_V100 1
+#define SXE2_ITR_VER_MAJOR_V200 2
+
+#define SXE2_ITR_VER_MAJOR 1
+#define SXE2_ITR_VER_MINOR 1
+#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR)
+
+#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100)
+#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200)
+
+#define SXE2LIB_ITR_VER_MAJOR 1
+#define SXE2LIB_ITR_VER_MINOR 1
+#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR)
+
+#define SXE2_DRV_CLI_VER_MAJOR 1
+#define SXE2_DRV_CLI_VER_MINOR 1
+#define SXE2_DRV_CLI_VER \
+	SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR)
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
new file mode 100644
index 0000000000..fd6823fe98
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -0,0 +1,584 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_OSAL_H__
+#define __SXE2_OSAL_H__
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_version.h>
+
+#include "sxe2_type.h"
+
+#define BIT(nr) (1UL << (nr))
+#ifndef __BITS_PER_LONG
+#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
+#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define MIN(a, b) ((a) < (b) ?
(a) : (b))
+
+#define BITS_PER_BYTE 8
+
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+
+#define STRUCT_SIZE(ptr, field, num) \
+	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	     (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	     (var) = (tvar))
+#endif
+
+#define SXE2_QUEUE_WAIT_RETRY_CNT (50)
+
+#define __iomem
+
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)((n) & 0xffffffff))
+
+#define dma_addr_t rte_iova_t
+
+#define resource_size_t u64
+
+#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f)
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define BE16_TO_CPU(o) rte_be_to_cpu_16(o)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+#define udelay(x) rte_delay_us(x)
+
+#define mdelay(x) rte_delay_us(1000 * (x))
+
+#define msleep(x) rte_delay_us(1000 * (x))
+
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) \
+	(((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d))
+#endif
+
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#define __bf_shf(x) ((uint32_t)rte_bsf64(x))
+
+#ifndef BITS_PER_LONG
+#define BITS_PER_LONG 32
+#endif
+
+#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask))
+#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)))
+
+#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d)
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef char s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
        'common/zsda', # depends on bus.
+       'common/sxe2', # depends on bus.
        'mempool', # depends on common and bus.
        'dma', # depends on common and bus.
        'net', # depends on common, bus, mempool
-- 
2.47.3
* [PATCH v2 4/9] common/sxe2: add base driver skeleton
2026-04-30  9:22 UTC (permalink / raw)
From: liujie5
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Initialize the sxe2 PMD skeleton by implementing the PCI probe and
remove functions. This includes the setup and cleanup of a character
device used for control-path communication between user space and the
hardware. The character device provides an interface for ioctl-based
management operations, supporting device-specific configuration.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 drivers/common/sxe2/meson.build            |   2 +
 drivers/common/sxe2/sxe2_common.c          | 636 +++++++++++++++++++++
 drivers/common/sxe2/sxe2_common.h          |  86 +++
 drivers/common/sxe2/sxe2_ioctl_chnl.c      | 161 ++++++
 drivers/common/sxe2/sxe2_ioctl_chnl.h      | 141 +++++
 drivers/common/sxe2/sxe2_ioctl_chnl_func.h |  45 ++
 6 files changed, 1071 insertions(+)
 create mode 100644 drivers/common/sxe2/sxe2_common.c
 create mode 100644 drivers/common/sxe2/sxe2_common.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h

diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build
index 7d448629d5..3626fb1119 100644
--- a/drivers/common/sxe2/meson.build
+++ b/drivers/common/sxe2/meson.build
@@ -9,5 +9,7 @@ cflags += [
 deps += ['bus_pci', 'net', 'eal', 'ethdev']
 
 sources = files(
+    'sxe2_common.c',
     'sxe2_common_log.c',
+    'sxe2_ioctl_chnl.c',
 )
diff --git a/drivers/common/sxe2/sxe2_common.c
b/drivers/common/sxe2/sxe2_common.c new file mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ 
+ struct sxe2_class_driver *cdrv = NULL; + + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void 
*args) +{ + u32 *class_type = (u32 *)args; + s32 ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshark failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + 
TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + 
+	}
+
+	cdev->cdrv = cdrv;
+l_end:
+	return ret;
+}
+
+static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev)
+{
+	struct sxe2_class_driver *cdrv = cdev->cdrv;
+
+	return cdrv->remove(cdev);
+}
+
+static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info)
+{
+	s32 ret = SXE2_SUCCESS;
+	u32 i;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		goto l_end;
+
+	if (kv_info == NULL)
+		goto l_end;
+
+	for (i = 0; i < kv_info->kvlist->count; i++) {
+		if (kv_info->is_used[i] == 0) {
+			PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.",
+					kv_info->kvlist->pairs[i].key);
+			ret = SXE2_ERR_INVAL;
+			goto l_end;
+		}
+	}
+
+l_end:
+	return ret;
+}
+
+static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		struct rte_pci_device *pci_dev)
+{
+	struct rte_device *rte_dev = &pci_dev->device;
+	struct sxe2_common_device *cdev;
+	struct sxe2_dev_kvargs_info *kv_info_p = NULL;
+	u32 class_type = SXE2_CLASS_TYPE_ETH;
+	s32 ret = SXE2_ERROR;
+
+	PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name);
+
+	cdev = sxe2_rtedev_to_cdev(rte_dev);
+	if (cdev != NULL) {
+		PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name);
+		ret = SXE2_ERR_BUSY;
+		goto l_end;
+	}
+
+	if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) {
+		kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info));
+		if (!kv_info_p) {
+			PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info");
+			ret = SXE2_ERR_NOMEM;
+			goto l_end;
+		}
+
+		ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs);
+		if (ret < 0) {
+			PMD_LOG_ERR(COM, "Unsupported device args: %s",
+					rte_dev->devargs->args);
+			goto l_free_kvargs;
+		}
+
+		ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS,
+				sxe2_parse_class_type, &class_type);
+		if (ret < 0) {
+			PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s",
+					rte_dev->devargs->args);
+			goto l_free_args;
+		}
+	}
+
+	cdev = sxe2_common_device_alloc(rte_dev, class_type);
+	if (cdev == NULL) {
+		ret = SXE2_ERR_NOMEM;
+		goto l_free_args;
+	}
+
+	ret = sxe2_common_device_setup(cdev);
+	if (ret != SXE2_SUCCESS)
+		goto l_err_setup;
+
+	ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type);
+	if (ret != SXE2_SUCCESS)
+		goto l_err_probe;
+
+	ret = sxe2_kvargs_validate(kv_info_p);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Device args validation failed: %s",
+				rte_dev->devargs->args);
+		goto l_err_valid;
+	}
+	cdev->kvargs = kv_info_p;
+
+	goto l_end;
+l_err_valid:
+	(void)sxe2_classes_driver_remove(cdev);
+l_err_probe:
+	sxe2_common_device_cleanup(cdev);
+l_err_setup:
+	sxe2_common_device_free(cdev);
+l_free_args:
+	sxe2_kvargs_free(kv_info_p);
+l_free_kvargs:
+	free(kv_info_p);
+l_end:
+	return ret;
+}
+
+static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct sxe2_common_device *cdev;
+	s32 ret = SXE2_ERROR;
+
+	PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name);
+	cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+	if (cdev == NULL) {
+		ret = SXE2_ERR_NODEV;
+		PMD_LOG_ERR(COM, "Failed to find the device to remove.");
+		goto l_end;
+	}
+
+	ret = sxe2_classes_driver_remove(cdev);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Failed to remove device: %s", pci_dev->name);
+		goto l_end;
+	}
+
+	sxe2_common_device_cleanup(cdev);
+
+	if (cdev->kvargs != NULL) {
+		sxe2_kvargs_free(cdev->kvargs);
+		free(cdev->kvargs);
+		cdev->kvargs = NULL;
+	}
+
+	sxe2_common_device_free(cdev);
+
+l_end:
+	return ret;
+}
+
+static struct rte_pci_driver sxe2_common_pci_driver = {
+	.driver = {
+		.name = SXE2_COMMON_PCI_DRIVER_NAME,
+	},
+	.probe = sxe2_common_pci_probe,
+	.remove = sxe2_common_pci_remove,
+};
+
+static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table)
+{
+	u32 table_size = 0;
+
+	while (id_table->vendor_id != 0) {
+		table_size++;
+		id_table++;
+	}
+
+	return table_size;
+}
+
+static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id,
+		const struct rte_pci_id *id_table, u32 next_idx)
+{
+	bool exists = false;
+	u32 i;
+
+	for (i = 0; i < next_idx; i++) {
+		if ((id->device_id == id_table[i].device_id) &&
+		    (id->vendor_id == id_table[i].vendor_id) &&
+		    (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) &&
+		    (id->subsystem_device_id == id_table[i].subsystem_device_id)) {
+			exists = true;
+			break;
+		}
+	}
+
+	return exists;
+}
+
+static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table,
+		u32 *next_idx, const struct rte_pci_id *insert_table)
+{
+	for (; insert_table->vendor_id != 0; insert_table++) {
+		if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) {
+			id_table[*next_idx] = *insert_table;
+			(*next_idx)++;
+		}
+	}
+}
+
+static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table)
+{
+	const struct rte_pci_id *id_iter;
+	struct rte_pci_id *updated_table;
+	struct rte_pci_id *old_table;
+	u32 num_ids = 0;
+	u32 i = 0;
+	s32 ret = SXE2_SUCCESS;
+
+	old_table = sxe2_common_pci_id_table;
+	if (old_table)
+		num_ids = sxe2_common_pci_id_table_size_get(old_table);
+
+	num_ids += sxe2_common_pci_id_table_size_get(id_table);
+
+	/* One extra entry for the zero terminator. */
+	num_ids += 1;
+
+	updated_table = calloc(num_ids, sizeof(*updated_table));
+	if (!updated_table) {
+		PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table");
+		ret = SXE2_ERR_NOMEM;
+		goto l_end;
+	}
+
+	if (old_table == NULL) {
+		for (id_iter = id_table; id_iter->vendor_id != 0;
+				id_iter++, i++)
+			updated_table[i] = *id_iter;
+	} else {
+		for (id_iter = old_table; id_iter->vendor_id != 0;
+				id_iter++, i++)
+			updated_table[i] = *id_iter;
+
+		sxe2_common_pci_id_insert(updated_table, &i, id_table);
+	}
+
+	updated_table[i].vendor_id = 0;
+	sxe2_common_pci_driver.id_table = updated_table;
+	sxe2_common_pci_id_table = updated_table;
+	free(old_table);
+
+l_end:
+	return ret;
+}
+
+static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver)
+{
+	if (driver->id_table != NULL) {
+		if (sxe2_common_pci_id_table_update(driver->id_table) != 0)
+			return;
+	}
+
+	if (driver->intr_lsc)
+		sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC;
+	if (driver->intr_rmv)
+		sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register)
+void
+sxe2_class_driver_register(struct sxe2_class_driver *driver)
+{
+	sxe2_common_driver_on_register_pci(driver);
+	TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next);
+}
+
+static void sxe2_common_pci_init(void)
+{
+	const struct rte_pci_id empty_table[] = {
+		{
+			.vendor_id = 0
+		},
+	};
+	s32 ret = SXE2_ERROR;
+
+	if (sxe2_common_pci_id_table == NULL) {
+		ret = sxe2_common_pci_id_table_update(empty_table);
+		if (ret != SXE2_SUCCESS)
+			goto l_end;
+	}
+	rte_pci_register(&sxe2_common_pci_driver);
+
+l_end:
+	return;
+}
+
+static bool sxe2_common_inited;
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init)
+void
+sxe2_common_init(void)
+{
+	if (sxe2_common_inited)
+		goto l_end;
+
+	pthread_mutex_init(&sxe2_common_devices_list_lock, NULL);
+#ifdef SXE2_DPDK_DEBUG
+	sxe2_common_log_stream_init();
+#endif
+	sxe2_common_pci_init();
+	sxe2_common_inited = true;
+
+l_end:
+	return;
+}
+
+RTE_FINI(sxe2_common_pci_finish)
+{
+	if (sxe2_common_pci_id_table != NULL) {
+		rte_pci_unregister(&sxe2_common_pci_driver);
+		free(sxe2_common_pci_id_table);
+	}
+}
+
+RTE_PMD_EXPORT_NAME(sxe2_common_pci);
diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h
new file mode 100644
index 0000000000..f62e00e053
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <rte_version.h>
+#include <eal_export.h>
+
+#include "sxe2_osal.h"
+#include "sxe2_errno.h"
+#include "sxe2_common_log.h"
+#include "sxe2_ioctl_chnl.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-"
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close)
+void
+sxe2_drv_cmd_close(struct sxe2_common_device *cdev)
+{
+	cdev->config.kernel_reset = true;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec)
+s32
+sxe2_drv_cmd_exec(struct sxe2_common_device *cdev,
+		struct sxe2_drv_cmd_params *cmd_params)
+{
+	s32 cmd_fd;
+	s32 ret = SXE2_ERR_IO;
+
+	if (cdev->config.kernel_reset) {
+		ret = SXE2_ERR_PERM;
+		PMD_LOG_WARN(COM, "kernel was reset, need to restart app.");
+		goto l_end;
+	}
+
+	cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+	if (cmd_fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd);
+		goto l_end;
+	}
+
+	PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"] "
+			"opcode[0x%x] req_len[%d] resp_len[%d]",
+			cmd_fd, cmd_params->trace_id, cmd_params->opcode,
+			cmd_params->req_len, cmd_params->resp_len);
+
+	rte_ticketlock_lock(&cdev->config.lock);
+	ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params);
+	if (ret < 0) {
+		PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s",
+				cmd_fd, cmd_params->opcode, ret, strerror(errno));
+		ret = -errno;
+		rte_ticketlock_unlock(&cdev->config.lock);
+		goto l_end;
+	}
+	rte_ticketlock_unlock(&cdev->config.lock);
+
+l_end:
+	return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open)
+s32
+sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct rte_pci_device *pci_dev)
+{
+	s32 ret = SXE2_SUCCESS;
+	s32 fd = 0;
+	s8 drv_name[32] = {0};
+
+	snprintf(drv_name, sizeof(drv_name),
+			"%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8,
+			SXE2_CHR_DEV_NAME,
+			pci_dev->addr.domain,
+			pci_dev->addr.bus,
+			pci_dev->addr.devid,
+			pci_dev->addr.function);
+
+	fd = open(drv_name, O_RDWR);
+	if (fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s",
+				drv_name, ret, strerror(errno));
+		goto l_end;
+	}
+
+	SXE2_CDEV_TO_CMD_FD(cdev) = fd;
+
+	PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d",
+			drv_name, SXE2_CDEV_TO_CMD_FD(cdev));
+
+l_end:
+	return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close)
+void
+sxe2_drv_dev_close(struct sxe2_common_device *cdev)
+{
+	s32 fd = SXE2_CDEV_TO_CMD_FD(cdev);
+
+	if (fd > 0)
+		close(fd);
+	PMD_LOG_INFO(COM, "closed device fd=%d", fd);
+	SXE2_CDEV_TO_CMD_FD(cdev) = -1;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark)
+s32
+sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
+{
+	s32 ret = SXE2_SUCCESS;
+	s32 cmd_fd = 0;
+	struct sxe2_ioctl_cmd_common_hdr cmd_params;
+
+	if (cdev->config.kernel_reset) {
+		ret = SXE2_ERR_PERM;
+		PMD_LOG_WARN(COM, "kernel was reset, need to restart app.");
+		goto l_end;
+	}
+
+	cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+	if (cmd_fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+		goto l_end;
+	}
+
+	PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd);
+
+	memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr));
+	cmd_params.dpdk_ver = SXE2_COM_VER;
+	cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr);
+
+	rte_ticketlock_lock(&cdev->config.lock);
+	ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params);
+	if (ret < 0) {
+		PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s",
+				cmd_fd, ret, strerror(errno));
+		ret = SXE2_ERR_IO;
+		rte_ticketlock_unlock(&cdev->config.lock);
+		goto l_end;
+	}
+
+
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */
+
+#ifndef __SXE2_IOCTL_CHNL_FUNC_H__
+#define __SXE2_IOCTL_CHNL_FUNC_H__
+
+#include <rte_version.h>
+#include <bus_pci_driver.h>
+
+#include "sxe2_type.h"
+#include "sxe2_common.h"
+#include "sxe2_ioctl_chnl.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+__rte_internal
+void
+sxe2_drv_cmd_close(struct sxe2_common_device *cdev);
+
+__rte_internal
+s32
+sxe2_drv_cmd_exec(struct sxe2_common_device *cdev,
+		struct sxe2_drv_cmd_params *cmd_params);
+
+__rte_internal
+s32
+sxe2_drv_dev_open(struct sxe2_common_device *cdev,
+		struct rte_pci_device *pci_dev);
+
+__rte_internal
+void
+sxe2_drv_dev_close(struct sxe2_common_device *cdev);
+
+__rte_internal
+s32
+sxe2_drv_dev_handshark(struct sxe2_common_device *cdev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
--
2.47.3
* [PATCH v2 5/9] drivers: add base driver probe skeleton
From: liujie5 @ 2026-04-30  9:22 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Initialize the eth_dev_ops for the sxe2 PMD. This includes the
implementation of mandatory ethdev operations such as dev_configure,
dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure
for device initialization so that the driver is recognized as a valid
Ethernet device within the DPDK framework.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 drivers/common/sxe2/sxe2_ioctl_chnl.c      |  27 +
 drivers/common/sxe2/sxe2_ioctl_chnl_func.h |   9 +
 drivers/net/meson.build                    |   1 +
 drivers/net/sxe2/meson.build               |  22 +
 drivers/net/sxe2/sxe2_cmd_chnl.c           | 319 +++++++++++
 drivers/net/sxe2/sxe2_cmd_chnl.h           |  33 ++
 drivers/net/sxe2/sxe2_drv_cmd.h            | 398 +++++++++
 drivers/net/sxe2/sxe2_ethdev.c             | 633 +++++++++++++++++++++
 drivers/net/sxe2/sxe2_ethdev.h             | 295 ++++++++++
 drivers/net/sxe2/sxe2_irq.h                |  49 ++
 drivers/net/sxe2/sxe2_queue.c              |  39 ++
 drivers/net/sxe2/sxe2_queue.h              | 227 ++++++++
 drivers/net/sxe2/sxe2_txrx_common.h        | 541 ++++++++++++++++++
 drivers/net/sxe2/sxe2_txrx_poll.h          |  16 +
 drivers/net/sxe2/sxe2_vsi.c                | 211 +++++++
 drivers/net/sxe2/sxe2_vsi.h                | 205 +++++++
 16 files changed, 3025 insertions(+)
 create mode 100644 drivers/net/sxe2/meson.build
 create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
 create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
 create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
 create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
 create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
create mode 100644 drivers/net/sxe2/sxe2_irq.h
 create mode 100644 drivers/net/sxe2/sxe2_queue.c
 create mode 100644 drivers/net/sxe2/sxe2_queue.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
 create mode 100644 drivers/net/sxe2/sxe2_vsi.c
 create mode 100644 drivers/net/sxe2/sxe2_vsi.h

diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index db09dd3126..e22731065d 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
 l_end:
 	return ret;
 }
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap)
+s32
+sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len)
+{
+	s32 ret = SXE2_SUCCESS;
+
+	if (cdev->config.kernel_reset) {
+		ret = SXE2_ERR_PERM;
+		PMD_LOG_WARN(COM, "kernel was reset, need to restart app.");
+		goto l_end;
+	}
+
+	PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64,
+			virt, len);
+
+	ret = munmap(virt, len);
+	if (ret < 0) {
+		PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s",
+				virt, len, strerror(errno));
+		ret = SXE2_ERR_IO;
+		goto l_end;
+	}
+
+l_end:
+	return ret;
+}
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
index 0c3cb9caea..376c5e3ac7 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
@@ -38,6 +38,15 @@ __rte_internal
 s32
 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev);
 
+__rte_internal
+void
+*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx,
+		u64 len, u64 offset);
+
+__rte_internal
+s32
+sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index c7dae4ad27..4e8ccb945f 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -58,6 +58,7 @@ drivers = [
         'rnp',
        'sfc',
        'softnic',
+        'sxe2',
        'tap',
        'thunderx',
        'txgbe',
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
new file mode 100644
index 0000000000..160a0de8ed
--- /dev/null
+++ b/drivers/net/sxe2/meson.build
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+# Build the base subdirectory and collect the target objects
+
+cflags += ['-DSXE2_DPDK_DRIVER']
+cflags += ['-DFPGA_VER_ASIC']
+if arch_subdir != 'arm'
+    cflags += ['-Werror']
+endif
+
+cflags += ['-g']
+
+deps += ['common_sxe2', 'hash', 'cryptodev', 'security']
+
+sources += files(
+        'sxe2_ethdev.c',
+        'sxe2_cmd_chnl.c',
+        'sxe2_vsi.c',
+        'sxe2_queue.c',
+)
+
+allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c
new file mode 100644
index 0000000000..b9749b0a08
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_cmd_chnl.c
@@ -0,0 +1,319 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+			adapter->repr_priv_data->repr_id : 0xFFFF;
+	cmd->req_len = in_len;
+	cmd->req_data = in_data;
+	cmd->resp_len = out_len;
+	cmd->resp_data = out_data;
+
+	sxe2_drv_trace_id_alloc(&cmd->trace_id);
+}
+
+#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \
+	__sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len)
+
+s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_CAPS,
+			NULL, 0, dev_caps,
+			sizeof(struct sxe2_drv_dev_caps_resp));
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret);
+
+	return ret;
+}
+
+s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter,
+		struct sxe2_drv_dev_info_resp *dev_info_resp)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_INFO,
+			NULL, 0, dev_info_resp,
+			sizeof(struct sxe2_drv_dev_info_resp));
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret);
+
+	return ret;
+}
+
+s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter,
+		struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_FW_INFO,
+			NULL, 0, dev_fw_info_resp,
+			sizeof(struct sxe2_drv_dev_fw_info_resp));
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret);
+
+	return ret;
+}
+
+s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_vsi_create_req_resp vsi_req = {0};
+	struct sxe2_drv_vsi_create_req_resp vsi_resp = {0};
+
+	vsi_req.vsi_id = vsi->vsi_id;
+
+	vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt);
+	vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func;
+	vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt;
+	vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf;
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_CREATE,
+			&vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp),
+			&vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp));
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret) {
+		PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret);
+		goto l_end;
+	}
+
+	vsi->vsi_id = vsi_resp.vsi_id;
+	vsi->vsi_type = vsi_resp.vsi_type;
+
+l_end:
+	return ret;
+}
+
+s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_vsi_free_req vsi_req = {0};
+
+	vsi_req.vsi_id = vsi->vsi_id;
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_FREE,
+			&vsi_req, sizeof(struct sxe2_drv_vsi_free_req),
+			NULL, 0);
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret);
+
+	return ret;
+}
+
+#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7)
+#define SXE2_RX_HDR_SIZE 256
+
+static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq,
+		struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt)
+{
+	struct sxe2_adapter *adapter = rxq->vsi->adapter;
+	struct sxe2_drv_rxq_ctxt *ctxt = req->cfg;
+	struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data;
+	s32 ret = SXE2_SUCCESS;
+
+	req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id;
+	req->q_cnt = rxq_cnt;
+	req->max_frame_size = dev_data->mtu + SXE2_ETH_OVERHEAD;
+
+	ctxt->queue_id = rxq->queue_id;
+	ctxt->depth = rxq->ring_depth;
+	ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN);
+	ctxt->dma_addr = rxq->base_addr;
+
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
+		ctxt->lro_en = 1;
+		ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size;
+	} else {
+		ctxt->lro_en = 0;
+	}
+
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+		ctxt->keep_crc_en = 1;
+	else
+		ctxt->keep_crc_en = 0;
+
+	ctxt->desc_size = sizeof(union sxe2_rx_desc);
+	return ret;
+}
+
+s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_rxq_cfg_req *req = NULL;
+	u16 len = 0;
+
+	len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt);
+	req = rte_zmalloc("sxe2_rxq_cfg", len, 0);
+	if (req == NULL) {
+		PMD_LOG_ERR(RX, "rxq cfg mem alloc failed");
+		ret = SXE2_ERR_NO_MEMORY;
+		goto l_end;
+	}
+
+	ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt);
+	if (ret) {
+		PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret);
+		ret = SXE2_ERR_INVAL;
+		goto l_end;
+	}
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_CFG_ENABLE,
+			req, len, NULL, 0);
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret);
+
+l_end:
+	if (req)
+		rte_free(req);
+	return ret;
+}
+
+static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq,
+		struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt)
+{
+	struct sxe2_drv_txq_ctxt *ctxt = req->cfg;
+	u16 q_idx = 0;
+
+	req->vsi_id = txq->vsi->vsi_id;
+	req->q_cnt = txq_cnt;
+
+	for (q_idx = 0; q_idx < txq_cnt; q_idx++) {
+		ctxt = &req->cfg[q_idx];
+		ctxt->depth = txq[q_idx].ring_depth;
+		ctxt->dma_addr = txq[q_idx].base_addr;
+		ctxt->queue_id = txq[q_idx].queue_id;
+	}
+}
+
+s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_txq_cfg_req *req;
+	u16 len = 0;
+
+	len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt);
+	req = rte_zmalloc("sxe2_txq_cfg", len, 0);
+	if (req == NULL) {
+		PMD_LOG_ERR(TX, "txq cfg mem alloc failed");
+		ret = SXE2_ERR_NO_MEMORY;
+		goto l_end;
+	}
+
+	sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt);
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE,
+			req, len, NULL, 0);
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret);
+
+l_end:
+	if (req)
+		rte_free(req);
+	return ret;
+}
+
+s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_q_switch_req req;
+
+	req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id);
+	req.q_idx = rxq->queue_id;
+	req.is_enable = (u8)enable;
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE,
+			&req, sizeof(req), NULL, 0);
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d",
+				enable, ret);
+
+	return ret;
+}
+
+s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_q_switch_req req;
+
+	req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id);
+	req.q_idx = txq->queue_id;
+	req.is_enable = (u8)enable;
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE,
+			&req, sizeof(req), NULL, 0);
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret) {
+		PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d",
+				enable, ret);
+	}
+
+	return ret;
+}
diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h
new file mode 100644
index 0000000000..200fe0be00
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_cmd_chnl.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_CMD_CHNL_H__
+#define __SXE2_CMD_CHNL_H__
+
+#include "sxe2_ethdev.h"
+#include "sxe2_drv_cmd.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter,
+		struct sxe2_drv_dev_caps_resp *dev_caps);
+
+s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter,
+		struct sxe2_drv_dev_info_resp *dev_info_resp);
+
+s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter,
+		struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp);
+
+s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi);
+
+s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi);
+
+s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable);
+
+s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable);
+
+s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt);
+
+s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt);
+
+#endif
diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h
new file mode 100644
index 0000000000..4094442077
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_drv_cmd.h
@@ -0,0 +1,398 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "enable queues failed"); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + 
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret 
= SXE2_SUCCESS; + + if (!cdev) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + 
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
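Note for reviewers: the SXE2_FLAG_* enumerators in this header are bit positions rather than masks (hence the explicit `= 15`, `= 16` gaps), presumably driven against a 64-bit word such as `adapter->cap_flags`. A minimal sketch of that usage, with illustrative helper names that are not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Bit positions mirroring a subset of the SXE2_FLAG_* enum (values, not masks). */
enum {
	FLAG_DRV_UP = 15,
	FLAG_DCB_ENABLE = 16,
};

/* Illustrative helpers; the driver may use different primitives. */
static inline void flag_set(uint64_t *flags, int bit)   { *flags |= (1ULL << bit); }
static inline void flag_clear(uint64_t *flags, int bit) { *flags &= ~(1ULL << bit); }
static inline int  flag_test(uint64_t flags, int bit)   { return !!(flags & (1ULL << bit)); }
```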
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
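Note on the ITR constants in sxe2_irq.h: the values are consistent with throttling intervals expressed in microseconds (1 us -> 1M interrupts/s, 2 us -> 500K/s, 20 us -> 50K/s). A sketch under that assumption, with locally mirrored names:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors SXE2_ITR_* from sxe2_irq.h; assumed to be microsecond intervals. */
#define ITR_1000K 1
#define ITR_500K  2
#define ITR_50K   20

/* Interrupt rate implied by a throttling interval, under the above assumption. */
static uint32_t itr_rate_per_sec(uint32_t interval_us)
{
	return 1000000u / interval_us;
}
```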
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
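For reference, sxe2_queues_init above derives rx_buf_len from the mempool data room: subtract the headroom, round down to a 128-byte multiple (1 << SXE2_RXQ_CTX_DBUFF_SHIFT), cap at SXE2_RX_MAX_DATA_BUF_SIZE, and enable scattered Rx when the frame no longer fits one buffer. The arithmetic, modelled standalone without DPDK headers:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors RTE_ALIGN_FLOOR for power-of-two alignments. */
#define ALIGN_FLOOR(v, a)  ((v) & ~((a) - 1))
#define DBUFF_ALIGN        (1u << 7)        /* 1 << SXE2_RXQ_CTX_DBUFF_SHIFT */
#define RX_MAX_DATA_BUF    (16u * 1024 - 128) /* SXE2_RX_MAX_DATA_BUF_SIZE */

/* Rx buffer length from mempool data room size and mbuf headroom. */
static uint16_t rx_buf_len_calc(uint16_t data_room, uint16_t headroom)
{
	uint16_t len = ALIGN_FLOOR((uint16_t)(data_room - headroom), DBUFF_ALIGN);

	return len < RX_MAX_DATA_BUF ? len : RX_MAX_DATA_BUF;
}
```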
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *rxq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t) pkts; + RTE_ATOMIC(uint64_t) bytes; + RTE_ATOMIC(uint64_t) drop_pkts; + RTE_ATOMIC(uint64_t) drop_bytes; + RTE_ATOMIC(uint64_t) unicast_pkts; + RTE_ATOMIC(uint64_t) multicast_pkts; + RTE_ATOMIC(uint64_t) broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...) PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
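The SXE2_RX_STATS_CNT / SXE2_TX_STATS_CNT macros above expand to nothing unless SXE2_DPDK_DEBUG is defined, so per-queue debug counters cost nothing on release builds. A reduced model of the pattern, with local stand-in types:

```c
#include <assert.h>
#include <stdint.h>

struct rxq_stats { uint64_t rx_pkts_num; };
struct rx_queue  { struct rxq_stats rx_stats; };

#define DEMO_DEBUG 1
#if DEMO_DEBUG
/* Debug build: increment the named counter on the queue. */
#define RX_STATS_CNT(rxq, name, num) \
	((((struct rx_queue *)(rxq))->rx_stats.name) += (num))
#else
/* Release build: statement compiles away entirely. */
#define RX_STATS_CNT(rxq, name, num)
#endif
```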
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
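The qw1 accessors in sxe2_txrx_common.h are plain shift-and-mask extractions over the Rx descriptor's status/error/ptype/length qword. A self-contained model using the same shift and mask values (local names; the driver's macros are the SXE2_RX_DESC_* family):

```c
#include <assert.h>
#include <stdint.h>

/* Same layout as SXE2_RX_DESC_PKT_LEN/PTYPE/STATUS_DD in sxe2_txrx_common.h. */
#define PKT_LEN_SHIFT  32
#define PKT_LEN_MASK   (0x3FFFULL << PKT_LEN_SHIFT)
#define PTYPE_SHIFT    16
#define PTYPE_MASK     (0x3FFULL << PTYPE_SHIFT)
#define STATUS_DD_MASK 0x1ULL

static uint16_t pkt_len_get(uint64_t qw1)
{
	return (uint16_t)((qw1 & PKT_LEN_MASK) >> PKT_LEN_SHIFT);
}

static uint16_t ptype_get(uint64_t qw1)
{
	return (uint16_t)((qw1 & PTYPE_MASK) >> PTYPE_SHIFT);
}

/* Descriptor-done: hardware has written the descriptor back. */
static int desc_done(uint64_t qw1)
{
	return !!(qw1 & STATUS_DD_MASK);
}
```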
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
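sxe2_main_vsi_create above only programs firmware for a brand-new VSI: a saved dpdk_vsi_id different from SXE2_INVALID_VSI_ID is reused with its recorded type, otherwise the VSI is created as the DPDK PF type and synced via sxe2_drv_vsi_add. That decision can be modelled as follows (constants mirrored locally for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define INVALID_VSI_ID 0xFFFF   /* mirrors SXE2_INVALID_VSI_ID */
#define VSI_T_DPDK_PF  7        /* mirrors SXE2_VSI_T_DPDK_PF's position */

/* True when an existing HW VSI id should be reused rather than created. */
static bool vsi_is_reused(uint16_t vsi_id)
{
	return vsi_id != INVALID_VSI_ID;
}

/* A fresh VSI is always created as the DPDK PF type; a reused id keeps
 * the type recorded in the vsi context. */
static uint16_t main_vsi_type_select(uint16_t vsi_id, uint16_t saved_type)
{
	return vsi_is_reused(vsi_id) ? saved_type : VSI_T_DPDK_PF;
}
```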
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v2 6/9] drivers: support PCI BAR mapping 2026-04-30 9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (4 preceding siblings ...) 2026-04-30 9:22 ` [PATCH v2 5/9] drivers: add base driver probe skeleton liujie5 @ 2026-04-30 9:22 ` liujie5 2026-04-30 9:22 ` [PATCH v2 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (2 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 9:22 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + 
cmd_fd, bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = 
map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + 
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v2 7/9] common/sxe2: add ioctl interface for DMA map and unmap 2026-04-30 9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (5 preceding siblings ...) 2026-04-30 9:22 ` [PATCH v2 6/9] drivers: support PCI BAR mapping liujie5 @ 2026-04-30 9:22 ` liujie5 2026-04-30 9:22 ` [PATCH v2 8/9] net/sxe2: support queue setup and control liujie5 2026-04-30 9:22 ` [PATCH v2 9/9] net/sxe2: add data path for Rx and Tx liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 9:22 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by the userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma map, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c 
b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "iommu does not support pa mode"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "no iommu, va mode not supported, please use pa mode."); + ret = SXE2_ERR_IO; + goto l_end; + } + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if 
(cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v2 8/9] net/sxe2: support queue setup and control 2026-04-30 9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (6 preceding siblings ...) 2026-04-30 9:22 ` [PATCH v2 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-04-30 9:22 ` liujie5 2026-04-30 9:22 ` [PATCH v2 9/9] net/sxe2: add data path for Rx and Tx liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 9:22 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 160a0de8ed..803e47c1aa 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -17,6 +17,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 
fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if 
(bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { #define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const 
struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold 
sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth = ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + 
rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configure with Keep crc.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, 
socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc *desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = 
rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + 
(void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + 
u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2vf tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
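[Editor's note: the Tx threshold rules enforced by sxe2_txq_arg_validate() above (rs_thresh below ring_depth - 2, free_thresh below ring_depth - 3, rs_thresh no greater than free_thresh, and ring_depth an exact multiple of rs_thresh) can be condensed into a standalone check. This is a minimal sketch: the RING_ALIGN/MIN/MAX constants and the txq_args_valid name are illustrative placeholders, not the driver's actual SXE2_* values, and the zero-threshold guard stands in for the defaulting the driver does before validating.]

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative placeholders -- the real limits live in sxe2_queue.h. */
#define RING_ALIGN    32
#define MIN_RING_DESC 64
#define MAX_RING_DESC 4096

/* Condensed mirror of the sxe2_txq_arg_validate() checks: returns 0 when the
 * descriptor count and thresholds are mutually consistent, -EINVAL otherwise.
 * The driver substitutes defaults for zero thresholds before validating; here
 * a zero is simply rejected to avoid a modulo-by-zero. */
static int txq_args_valid(uint16_t ring_depth, uint16_t rs_thresh,
			  uint16_t free_thresh)
{
	if (ring_depth % RING_ALIGN != 0 ||
	    ring_depth > MAX_RING_DESC || ring_depth < MIN_RING_DESC)
		return -EINVAL;	/* unsupported ring size */
	if (rs_thresh == 0 || free_thresh == 0)
		return -EINVAL;	/* caller must apply defaults first */
	if (rs_thresh >= (uint16_t)(ring_depth - 2))
		return -EINVAL;	/* must leave at least 2 spare descriptors */
	if (free_thresh >= (uint16_t)(ring_depth - 3))
		return -EINVAL;	/* must leave at least 3 spare descriptors */
	if (rs_thresh > free_thresh)
		return -EINVAL;	/* RS cadence cannot exceed the free batch */
	if (ring_depth % rs_thresh != 0)
		return -EINVAL;	/* RS marks must tile the ring exactly */
	return 0;
}
```

Requiring ring_depth to be a multiple of rs_thresh is what lets the driver advance next_rs/next_dd by fixed rs_thresh steps with simple wrap-around arithmetic in sxe2_tx_queue_reset().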
* [PATCH v2 9/9] net/sxe2: add data path for Rx and Tx 2026-04-30 9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (7 preceding siblings ...) 2026-04-30 9:22 ` [PATCH v2 8/9] net/sxe2: support queue setup and control liujie5 @ 2026-04-30 9:22 ` liujie5 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 8 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-04-30 9:22 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 6 + drivers/net/sxe2/sxe2_txrx.c | 249 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 10 files changed, 1071 insertions(+), 122 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef 
SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) \ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) 
\ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { 
- PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ -178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = 
SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 803e47c1aa..728a88b6a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -19,6 +19,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..856da2c296 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable 
queues failed"); @@ -760,6 +764,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return 
SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto 
l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst 
= sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
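The receive routine above is the tail of a classic descriptor write-back loop: poll the DD (descriptor done) bit, detach the completed mbuf, replenish the slot with a fresh buffer before advancing, and only then update `processing_idx` and the tail register. A minimal standalone sketch of that harvest-and-replenish pattern follows; the types and names (`struct rx_desc`, `struct mbuf`, `rx_burst`) are simplified stand-ins, not the driver's actual structures, and the buffer-split, CRC-strip, and statistics branches are omitted:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define RING_DEPTH 8
#define DD_BIT (1ULL << 0)   /* stand-in for a DD status mask */

/* Simplified stand-ins for the driver's descriptor and mbuf types. */
struct rx_desc {
	uint64_t pkt_addr;     /* DMA address the NIC writes the packet to */
	uint64_t status_len;   /* write-back word: DD bit + packet length */
};

struct mbuf {
	uint16_t data_len;
};

struct rxq {
	struct rx_desc ring[RING_DEPTH];
	struct mbuf *sw_ring[RING_DEPTH];  /* buffer ring equivalent */
	uint16_t head;                     /* processing index equivalent */
};

/*
 * Harvest completed descriptors: stop at the first descriptor whose DD
 * bit is clear, hand the filled mbuf to the caller, and immediately
 * replenish the slot so the ring never runs dry.
 */
static uint16_t
rx_burst(struct rxq *q, struct mbuf **pkts, uint16_t nb)
{
	uint16_t done = 0;

	while (done < nb) {
		struct rx_desc *d = &q->ring[q->head];

		if (!(d->status_len & DD_BIT))
			break;               /* hardware not done yet */

		struct mbuf *new_m = calloc(1, sizeof(*new_m));
		if (new_m == NULL)
			break;               /* keep old mbuf on alloc failure */

		struct mbuf *m = q->sw_ring[q->head];
		m->data_len = (uint16_t)(d->status_len >> 32);

		pkts[done++] = m;
		q->sw_ring[q->head] = new_m; /* give the slot back to HW */
		d->status_len = 0;           /* clear DD for the next pass */
		q->head = (uint16_t)((q->head + 1) % RING_DEPTH);
	}
	return done;
}
```

The key invariant, which the sxe2 code above also maintains, is that the loop never advances past a slot unless a replacement buffer was successfully allocated; on allocation failure it breaks out and the hardware keeps the old buffer.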
* [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver 2026-04-30 9:22 ` [PATCH v2 9/9] net/sxe2: add data path for Rx and Tx liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 1/9] mailmap: add Jie Liu liujie5 ` (10 more replies) 0 siblings, 11 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch set implements core functionality for the SXE2 PMD, a Linkdata sxe2 Ethernet driver. V3: Addressed AI comments Jie Liu (9): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control net/sxe2: add data path for Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 3 + drivers/common/sxe2/meson.build | 15 + drivers/common/sxe2/sxe2_common.c | 684 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 26 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 975 +++++++++++++++++++++ 
drivers/net/sxe2/sxe2_ethdev.h | 316 +++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 249 ++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 782 +++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 40 files changed, 8689 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 
drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
* [PATCH v3 1/9] mailmap: add Jie Liu 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 2/9] doc: add sxe2 guide and release notes liujie5 ` (9 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 0e0d83e1c6..a6c3319dec 100644 --- a/.mailmap +++ b/.mailmap @@ -738,6 +738,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v3 2/9] doc: add sxe2 guide and release notes 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 2026-04-30 10:18 ` [PATCH v3 1/9] mailmap: add Jie Liu liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 3/9] drivers: add sxe2 basic structures liujie5 ` (8 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for the SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 3 +++ 4 files changed, 38 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates it is only supported when the non-vector path +; is selected. 
+; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps network adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported. + +Implementation details +---------------------- + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resource allocations are handled by the kernel, +combined with hardware specifications that allow it to handle virtual memory +addresses directly, ensures that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces, +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index 060b26ff61..93fb0072a9 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -55,6 +55,9 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added Linkdata sxe2 ethernet driver.** + + Added network driver for Linkdata network adapters. 
Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v3 3/9] drivers: add sxe2 basic structures 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 2026-04-30 10:18 ` [PATCH v3 1/9] mailmap: add Jie Liu liujie5 2026-04-30 10:18 ` [PATCH v3 2/9] doc: add sxe2 guide and release notes liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 4/9] common/sxe2: add base driver skeleton liujie5 ` (7 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add the base infrastructure for the sxe2 common library. It includes the OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically: - Implement the logging stream management using RTE_LOG_LINE. - Define device-specific error codes and status registers. - Add the initial meson build configuration for the common library. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 13 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1959 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..7d448629d5 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void +sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, 
Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) 
\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) \ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) 
\ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = -ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, 
+ + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMIEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT 
BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 +#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) 
+#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + 
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define 
SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define 
SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define 
SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 
0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */
+
+#ifndef __SXE2_INTERNAL_VER_H__
+#define __SXE2_INTERNAL_VER_H__
+
+#define SXE2_VER_MAJOR_OFFSET (16)
+#define SXE2_MK_VER(major, minor) \
+	((major) << SXE2_VER_MAJOR_OFFSET | (minor))
+#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff)
+#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff)
+
+#define SXE2_ITR_VER_MAJOR_V100 1
+#define SXE2_ITR_VER_MAJOR_V200 2
+
+#define SXE2_ITR_VER_MAJOR 1
+#define SXE2_ITR_VER_MINOR 1
+#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR)
+
+#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100)
+#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200)
+
+#define SXE2LIB_ITR_VER_MAJOR 1
+#define SXE2LIB_ITR_VER_MINOR 1
+#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR)
+
+#define SXE2_DRV_CLI_VER_MAJOR 1
+#define SXE2_DRV_CLI_VER_MINOR 1
+#define SXE2_DRV_CLI_VER \
+	SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR)
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
new file mode 100644
index 0000000000..fd6823fe98
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -0,0 +1,584 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_OSAL_H__
+#define __SXE2_OSAL_H__
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_version.h>
+
+#include "sxe2_type.h"
+
+#define BIT(nr) (1UL << (nr))
+#ifndef __BITS_PER_LONG
+#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
+#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define MIN(a, b) ((a) < (b) ? (a) : (b))
+
+#define BITS_PER_BYTE 8
+
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+
+#define STRUCT_SIZE(ptr, field, num) \
+	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	     (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	     (var) = (tvar))
+#endif
+
+#define SXE2_QUEUE_WAIT_RETRY_CNT (50)
+
+#define __iomem
+
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)((n) & 0xffffffff))
+
+#define dma_addr_t rte_iova_t
+
+#define resource_size_t u64
+
+#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f)
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define BE16_TO_CPU(o) rte_be_to_cpu_16(o)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+#define udelay(x) rte_delay_us(x)
+
+#define mdelay(x) rte_delay_us(1000 * (x))
+
+#define msleep(x) rte_delay_us(1000 * (x))
+
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) \
+	(((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d))
+#endif
+
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#define __bf_shf(x) ((uint32_t)rte_bsf64(x))
+
+#ifndef BITS_PER_LONG
+#define BITS_PER_LONG 32
+#endif
+
+#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask))
+#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)))
+
+#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d)
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits)
+{
+	if (small_const_nbits(nbits))
+		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
+	if (__rte_constant(nbits & BITMAP_MEM_MASK) &&
+	    IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
+		return !memcmp(src1, src2, nbits / 8);
+	return __bitmap_equal(src1, src2, nbits);
+}
+
+static inline unsigned long
+find_next_bit(const unsigned long *addr, unsigned long size,
+	      unsigned long offset)
+{
+	unsigned long i;
+
+	for (i = offset; i < size; i++) {
+		if (test_bit(i, addr))
+			break;
+	}
+	return i;
+}
+
+static inline unsigned long
+find_next_zero_bit(const unsigned long *addr, unsigned long size,
+		   unsigned long offset)
+{
+	unsigned long i;
+
+	for (i = offset; i < size; i++) {
+		if (!test_bit(i, addr))
+			break;
+	}
+	return i;
+}
+
+static inline void bitmap_copy(unsigned long *dst, const unsigned long *src,
+			       u32 nbits)
+{
+	u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+	memcpy(dst, src, len);
+}
+
+static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size)
+{
+	return find_next_zero_bit(addr, size, 0);
+}
+
+static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size)
+{
+	return find_next_bit(addr, size, 0);
+}
+
+#define for_each_clear_bit(bit, addr, size) \
+	for ((bit) = find_first_zero_bit((addr), (size)); \
+	     (bit) < (size); \
+	     (bit) = find_next_zero_bit((addr), (size), (bit) + 1))
+
+#define for_each_set_bit(bit, addr, size) \
+	for ((bit) = find_first_bit((addr), (size)); \
+	     (bit) < (size); \
+	     (bit) = find_next_bit((addr), (size), (bit) + 1))
+
+struct sxe2_adapter;
+
+static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size)
+{
+	return rte_zmalloc(NULL, size, 0);
+}
+
+static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size)
+{
+	return rte_calloc(NULL, num, size, 0);
+}
+
+static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr)
+{
+	rte_free(ptr);
+}
+
+static inline void
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef char s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
'common/zsda', # depends on bus. + 'common/sxe2', # depends on bus. 'mempool', # depends on common and bus. 'dma', # depends on common and bus. 'net', # depends on common, bus, mempool -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v3 4/9] common/sxe2: add base driver skeleton 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (2 preceding siblings ...) 2026-04-30 10:18 ` [PATCH v3 3/9] drivers: add sxe2 basic structures liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 5/9] drivers: add base driver probe skeleton liujie5 ` (6 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between the user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 2 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ 6 files changed, 1071 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build index 7d448629d5..3626fb1119 100644 --- a/drivers/common/sxe2/meson.build +++ b/drivers/common/sxe2/meson.build @@ -9,5 +9,7 @@ cflags += [ deps += ['bus_pci', 'net', 'eal', 'ethdev'] sources = files( + 'sxe2_common.c', 'sxe2_common_log.c', + 'sxe2_ioctl_chnl.c', ) diff --git a/drivers/common/sxe2/sxe2_common.c 
b/drivers/common/sxe2/sxe2_common.c new file mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ 
+ struct sxe2_class_driver *cdrv = NULL; + + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void 
*args)
+{
+	u32 *class_type = (u32 *)args;
+	s32 ret = SXE2_SUCCESS;
+
+	*class_type = sxe2_class_name_to_value(value);
+	if (*class_type == SXE2_CLASS_TYPE_INVALID) {
+		ret = SXE2_ERR_INVAL;
+		PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value);
+	}
+
+	return ret;
+}
+
+static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+	s32 ret = SXE2_SUCCESS;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		goto l_end;
+
+	ret = sxe2_drv_dev_open(cdev, pci_dev);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret);
+		goto l_end;
+	}
+
+	ret = sxe2_drv_dev_handshark(cdev);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret);
+		goto l_close_dev;
+	}
+
+	goto l_end;
+
+l_close_dev:
+	sxe2_drv_dev_close(cdev);
+l_end:
+	return ret;
+}
+
+static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+
+	if (TAILQ_EMPTY(&sxe2_common_devices_list))
+		(void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL);
+
+	sxe2_drv_dev_close(cdev);
+}
+
+static struct sxe2_common_device *sxe2_common_device_alloc(
+	struct rte_device *rte_dev, u32 class_type)
+{
+	struct sxe2_common_device *cdev = NULL;
+
+	cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0);
+	if (cdev == NULL) {
+		PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device.");
+		goto l_end;
+	}
+	cdev->dev = rte_dev;
+	cdev->class_type = class_type;
+	cdev->config.kernel_reset = false;
+	rte_ticketlock_init(&cdev->config.lock);
+
+	(void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+	TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next);
+	(void)pthread_mutex_unlock(&sxe2_common_devices_list_lock);
+
+l_end:
+	return cdev;
+}
+
+static void sxe2_common_device_free(struct sxe2_common_device *cdev)
+{
+	(void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+
TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + 
} + + cdev->cdrv = cdrv; +l_end: + return ret; +} + +static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto 
l_free_args; + } + + ret = sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool 
exists = false; + + for (i = 0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + 
sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC;
+	if (driver->intr_rmv)
+		sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register)
+void
+sxe2_class_driver_register(struct sxe2_class_driver *driver)
+{
+	sxe2_common_driver_on_register_pci(driver);
+	TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next);
+}
+
+static void sxe2_common_pci_init(void)
+{
+	const struct rte_pci_id empty_table[] = {
+		{
+			.vendor_id = 0
+		},
+	};
+	s32 ret = SXE2_ERROR;
+
+	if (sxe2_common_pci_id_table == NULL) {
+		ret = sxe2_common_pci_id_table_update(empty_table);
+		if (ret != SXE2_SUCCESS)
+			goto l_end;
+	}
+	rte_pci_register(&sxe2_common_pci_driver);
+
+l_end:
+	return;
+}
+
+static bool sxe2_common_inited;
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init)
+void
+sxe2_common_init(void)
+{
+	if (sxe2_common_inited)
+		goto l_end;
+
+	pthread_mutex_init(&sxe2_common_devices_list_lock, NULL);
+#ifdef SXE2_DPDK_DEBUG
+	sxe2_common_log_stream_init();
+#endif
+	sxe2_common_pci_init();
+	sxe2_common_inited = true;
+
+l_end:
+	return;
+}
+
+RTE_FINI(sxe2_common_pci_finish)
+{
+	if (sxe2_common_pci_id_table != NULL) {
+		rte_pci_unregister(&sxe2_common_pci_driver);
+		free(sxe2_common_pci_id_table);
+	}
+}
+
+RTE_PMD_EXPORT_NAME(sxe2_common_pci);
diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h
new file mode 100644
index 0000000000..f62e00e053
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <rte_version.h>
+#include <eal_export.h>
+
+#include "sxe2_osal.h"
+#include "sxe2_errno.h"
+#include "sxe2_common_log.h"
+#include "sxe2_ioctl_chnl.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-"
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close)
+void
+sxe2_drv_cmd_close(struct sxe2_common_device *cdev)
+{
+	cdev->config.kernel_reset = true;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec)
+s32
+sxe2_drv_cmd_exec(struct sxe2_common_device *cdev,
+	struct sxe2_drv_cmd_params *cmd_params)
+{
+	s32 cmd_fd;
+	s32 ret = SXE2_ERR_IO;
+
+	if (cdev->config.kernel_reset) {
+		ret = SXE2_ERR_PERM;
+		PMD_LOG_WARN(COM, "kernel was reset, application restart required.");
+		goto l_end;
+	}
+
+	cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+	if (cmd_fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd);
+		goto l_end;
+	}
+
+	PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"] "
+		"opcode[0x%x] req_len[%d] resp_len[%d]",
+		cmd_fd, cmd_params->trace_id, cmd_params->opcode,
+		cmd_params->req_len, cmd_params->resp_len);
+
+	rte_ticketlock_lock(&cdev->config.lock);
+	ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params);
+	if (ret < 0) {
+		PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s",
+			cmd_fd, cmd_params->opcode, ret, strerror(errno));
+		ret = -errno;
+		rte_ticketlock_unlock(&cdev->config.lock);
+		goto l_end;
+	}
+	rte_ticketlock_unlock(&cdev->config.lock);
+
+l_end:
+	return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open)
+s32
+sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct
rte_pci_device *pci_dev)
+{
+	s32 ret = SXE2_SUCCESS;
+	s32 fd = 0;
+	s8 drv_name[32] = {0};
+
+	snprintf(drv_name, sizeof(drv_name),
+		"%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8,
+		SXE2_CHR_DEV_NAME,
+		pci_dev->addr.domain,
+		pci_dev->addr.bus,
+		pci_dev->addr.devid,
+		pci_dev->addr.function);
+
+	fd = open(drv_name, O_RDWR);
+	if (fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s",
+			drv_name, ret, strerror(errno));
+		goto l_end;
+	}
+
+	SXE2_CDEV_TO_CMD_FD(cdev) = fd;
+
+	PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d",
+		drv_name, SXE2_CDEV_TO_CMD_FD(cdev));
+
+l_end:
+	return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close)
+void
+sxe2_drv_dev_close(struct sxe2_common_device *cdev)
+{
+	s32 fd = SXE2_CDEV_TO_CMD_FD(cdev);
+
+	if (fd > 0)
+		close(fd);
+	PMD_LOG_INFO(COM, "closed device fd=%d", fd);
+	SXE2_CDEV_TO_CMD_FD(cdev) = -1;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark)
+s32
+sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
+{
+	s32 ret = SXE2_SUCCESS;
+	s32 cmd_fd = 0;
+	struct sxe2_ioctl_cmd_common_hdr cmd_params;
+
+	if (cdev->config.kernel_reset) {
+		ret = SXE2_ERR_PERM;
+		PMD_LOG_WARN(COM, "kernel was reset, application restart required.");
+		goto l_end;
+	}
+
+	cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+	if (cmd_fd < 0) {
+		ret = SXE2_ERR_BADF;
+		PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+		goto l_end;
+	}
+
+	PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd);
+
+	memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr));
+	cmd_params.dpdk_ver = SXE2_COM_VER;
+	cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr);
+
+	rte_ticketlock_lock(&cdev->config.lock);
+	ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params);
+	if (ret < 0) {
+		PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s",
+			cmd_fd, ret, strerror(errno));
+		ret = SXE2_ERR_IO;
+		rte_ticketlock_unlock(&cdev->config.lock);
+		goto l_end;
+	}
+
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v3 5/9] drivers: add base driver probe skeleton 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (3 preceding siblings ...) 2026-04-30 10:18 ` [PATCH v3 4/9] common/sxe2: add base driver skeleton liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 6/9] drivers: support PCI BAR mapping liujie5 ` (5 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 22 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3025 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 
drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@
-58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..160a0de8ed --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Process the base subdirectory and fetch the target objects + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, 
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return 
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "enable queues failed"); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + goto l_end; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + 
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev; + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret = SXE2_SUCCESS; + + if (!cdev) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI(cdev->dev); + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (eth_dev == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + } else { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + } + goto l_end; + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto 
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + 
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *rxq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t) pkts; + RTE_ATOMIC(uint64_t) bytes; + RTE_ATOMIC(uint64_t) drop_pkts; + RTE_ATOMIC(uint64_t) drop_bytes; + RTE_ATOMIC(uint64_t) unicast_pkts; + RTE_ATOMIC(uint64_t) multicast_pkts; + RTE_ATOMIC(uint64_t) broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...) PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v3 6/9] drivers: support PCI BAR mapping 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (4 preceding siblings ...) 2026-04-30 10:18 ` [PATCH v3 5/9] drivers: add base driver probe skeleton liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (4 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel was reset, need to restart the app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to get cmd fd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", 
offset=0x%"PRIx64"", + cmd_fd, bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + 
bar_idx = map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, org_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; 
+ seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + map_ctxt->bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
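The segment mapping set up by sxe2_dev_pci_seg_map has to respect mmap()'s page granularity: the requested register offset is rounded down to a page boundary, the length rounded up, and the in-page residue is kept as page_inner_offset so that the mapped base plus that residue points at the first register. A minimal sketch of the alignment arithmetic, assuming a fixed 4 KiB page size (the helper name and struct are illustrative, not from the driver):

```c
#include <stdint.h>

#define PAGE_SIZE 4096ULL /* illustrative; the driver would use the real page size */

struct seg_align {
	uint64_t aligned_offset;    /* page-aligned offset handed to mmap() */
	uint64_t aligned_len;       /* page-aligned mapping length */
	uint64_t page_inner_offset; /* register offset inside the mapping */
};

/* Round the requested BAR range (org_offset, org_len) out to page
 * boundaries, keeping the in-page residue for later address math. */
static struct seg_align seg_align_calc(uint64_t org_offset, uint64_t org_len)
{
	struct seg_align a;

	a.aligned_offset = org_offset & ~(PAGE_SIZE - 1);
	a.page_inner_offset = org_offset - a.aligned_offset;
	a.aligned_len = (a.page_inner_offset + org_len + PAGE_SIZE - 1) &
			~(PAGE_SIZE - 1);
	return a;
}
```

For example, a 0x20-byte register window at BAR offset 0x1010 maps one page starting at 0x1000 with a page_inner_offset of 0x10, which is exactly the offset sxe2_pci_map_addr_get would add back on top of the mapped address.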
* [PATCH v3 7/9] common/sxe2: add ioctl interface for DMA map and unmap 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (5 preceding siblings ...) 2026-04-30 10:18 ` [PATCH v3 6/9] drivers: support PCI BAR mapping liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 8/9] net/sxe2: support queue setup and control liujie5 ` (3 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by the userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Fail to dma map, ret=%d", ret); + goto 
l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Fail to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "iommu does not support pa mode"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "va mode requires iommu, please use pa mode."); + ret = SXE2_ERR_IO; + 
goto l_end; + } + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIx64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- 
a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
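The sxe2_drv_dev_dma_map function above only issues the SXE2_COM_CMD_DMA_MAP ioctl in one of the four IOVA-mode/IOMMU combinations; the other three either skip the call (PA mode without IOMMU, where the device uses physical addresses directly) or fail. That gate can be sketched as a small decision function; the enum names below are stand-ins for rte_eal_iova_mode() and the driver's support_iommu flag, not real DPDK definitions:

```c
#include <stdbool.h>

enum iova_mode { IOVA_PA, IOVA_VA };           /* stand-in for rte_iova_mode */
enum map_action { MAP_SKIP, MAP_DO, MAP_ERR }; /* illustrative outcomes */

/* Mirror of the checks in sxe2_drv_dev_dma_map: in PA mode the device is
 * given physical addresses, so a per-buffer IOMMU mapping is only an error
 * if an IOMMU is in the path; in VA mode the IOMMU is mandatory. */
static enum map_action dma_map_action(enum iova_mode mode, bool support_iommu)
{
	if (mode == IOVA_PA)
		return support_iommu ? MAP_ERR : MAP_SKIP;

	/* IOVA_VA: the device must translate through the IOMMU */
	return support_iommu ? MAP_DO : MAP_ERR;
}
```

Only the MAP_DO case reaches the ioctl; MAP_SKIP returns success without touching the kernel, matching the early `goto l_end` in the PA path.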
* [PATCH v3 8/9] net/sxe2: support queue setup and control 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (6 preceding siblings ...) 2026-04-30 10:18 ` [PATCH v3 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-04-30 10:18 ` liujie5 2026-04-30 10:18 ` [PATCH v3 9/9] net/sxe2: add data path for Rx and Tx liujie5 ` (2 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
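Before any ring memory is allocated, the rx_queue_setup path in this patch rejects descriptor counts that are not a multiple of the descriptor alignment or that fall outside the supported ring depth range. A sketch of that check; the constant values here are assumptions standing in for SXE2_RX_DESC_RING_ALIGN, SXE2_MIN_RING_DESC and SXE2_MAX_RING_DESC, whose real values may differ:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed stand-ins for the SXE2_* ring limits. */
#define DESC_RING_ALIGN 8
#define MIN_RING_DESC   64
#define MAX_RING_DESC   4096

/* Same shape as the nb_desc validation in sxe2_rx_queue_setup:
 * the depth must be aligned and within the hardware-supported range. */
static bool ring_depth_valid(uint16_t nb_desc)
{
	return nb_desc % DESC_RING_ALIGN == 0 &&
	       nb_desc >= MIN_RING_DESC &&
	       nb_desc <= MAX_RING_DESC;
}
```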
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 160a0de8ed..803e47c1aa 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -17,6 +17,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 
sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { 
#define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + 
rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if 
(dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth 
= ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + 
dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configured with keep CRC.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc 
*desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + 
PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u failed to allocate mbufs", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + 
rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ 
b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2 tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v3 9/9] net/sxe2: add data path for Rx and Tx 2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (7 preceding siblings ...) 2026-04-30 10:18 ` [PATCH v3 8/9] net/sxe2: support queue setup and control liujie5 @ 2026-04-30 10:18 ` liujie5 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 2026-04-30 16:21 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver Stephen Hemminger 2026-04-30 17:02 ` Stephen Hemminger 10 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-04-30 10:18 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 10 files changed, 1076 insertions(+), 126 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) \ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) 
\ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) 
RTE_SET_USED(adapter) -#endif - -#endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ -178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "bar idx=%d, fd=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed 
mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "VA mode is not supported without IOMMU, please use PA mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? 
(a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 803e47c1aa..728a88b6a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -19,6 +19,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * 
idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > 
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver 2026-04-30 10:18 ` [PATCH v3 9/9] net/sxe2: add data path for Rx and Tx liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 1:59 ` [PATCH v4 1/9] mailmap: add Jie Liu liujie5 ` (8 more replies) 0 siblings, 9 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch set implements core functionality for the SXE PMD driver. V4: - Addressed AI comments Jie Liu (9): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control net/sxe2: add data path for Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 3 + drivers/common/sxe2/meson.build | 15 + drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 26 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 975 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 316 
+++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 249 ++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 782 +++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 40 files changed, 8688 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 
100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
* [PATCH v4 1/9] mailmap: add Jie Liu 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 1:59 ` [PATCH v4 2/9] doc: add sxe2 guide and release notes liujie5 ` (7 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 0e0d83e1c6..a6c3319dec 100644 --- a/.mailmap +++ b/.mailmap @@ -738,6 +738,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v4 2/9] doc: add sxe2 guide and release notes 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 2026-05-01 1:59 ` [PATCH v4 1/9] mailmap: add Jie Liu liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 1:59 ` [PATCH v4 3/9] drivers: add sxe2 basic structures liujie5 ` (6 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for the SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 3 +++ 4 files changed, 38 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates it is only supported when the non-vector +; path is selected.
+; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps network adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported. + +Implementation details +---------------------- + +For security and robustness, this driver only deals with virtual +memory addresses. The way resource allocations are handled by the kernel, +combined with hardware support for addressing virtual memory +directly, ensures that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces, +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index 060b26ff61..93fb0072a9 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -55,6 +55,9 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added Linkdata sxe2 ethernet driver.** + + Added a network driver for Linkdata network adapters. 
Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v4 3/9] drivers: add sxe2 basic structures 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 2026-05-01 1:59 ` [PATCH v4 1/9] mailmap: add Jie Liu liujie5 2026-05-01 1:59 ` [PATCH v4 2/9] doc: add sxe2 guide and release notes liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 3:05 ` Stephen Hemminger 2026-05-01 1:59 ` [PATCH v4 4/9] common/sxe2: add base driver skeleton liujie5 ` (5 subsequent siblings) 8 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 13 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1959 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..7d448629d5 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */
+
+#include <eal_export.h>
+#include <errno.h>
+#include <string.h>
+#include <time.h>
+#include <rte_log.h>
+
+#include "sxe2_common_log.h"
+
+#ifdef SXE2_DPDK_DEBUG
+#define SXE2_COMMON_LOG_FILE_NAME_LEN 256
+#define SXE2_COMMON_LOG_FILE_PATH "/var/log/"
+
+FILE *g_sxe2_common_log_fp;
+s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0};
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init)
+void
+sxe2_common_log_stream_init(void)
+{
+	FILE *fp;
+	struct tm *td;
+	time_t rawtime;
+	u8 len;
+	s8 stime[40];
+
+	if (g_sxe2_common_log_fp)
+		goto l_end;
+
+	memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN);
+
+	len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN,
+			"%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH);
+
+	time(&rawtime);
+	td = localtime(&rawtime);
+	strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td);
+
+	snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len,
+			"%s", stime);
+
+	fp = fopen(g_sxe2_common_log_filename, "w+");
+	if (fp == NULL) {
+		RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.",
+				g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA
+				strerror(errno));
+		goto l_end;
+	}
+	g_sxe2_common_log_fp = fp;
+
+l_end:
+	return;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open)
+void
+sxe2_common_log_stream_open(void)
+{
+	rte_openlog_stream(g_sxe2_common_log_fp);
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close)
+void
+sxe2_common_log_stream_close(void)
+{
+	rte_openlog_stream(NULL);
+}
+#endif
+
+#ifdef SXE2_DPDK_DEBUG
+RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG);
+#else
+RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE);
+#endif
diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h
new file mode 100644
index 0000000000..8ade49d020
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common_log.h
@@ -0,0 +1,368 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_COMMON_LOG_H__
+#define __SXE2_COMMON_LOG_H__
+
+#ifndef RTE_EXEC_ENV_WINDOWS
+#include <pthread.h>
+#else
+#include <windows.h>
+#endif
+
+#include "sxe2_type.h"
+
+extern s32 sxe2_common_log;
+extern s32 sxe2_log_init;
+extern s32 sxe2_log_driver;
+extern s32 sxe2_log_rx;
+extern s32 sxe2_log_tx;
+extern s32 sxe2_log_hw;
+
+#define RTE_LOGTYPE_SXE2_COM sxe2_common_log
+#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init
+#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver
+#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx
+#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx
+#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw
+
+#define STIME(log_time) \
+	do { \
+		time_t tv; \
+		struct tm *td; \
+		time(&tv); \
+		td = localtime(&tv); \
+		strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \
+	} while (0)
+
+#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x))
+
+#ifndef RTE_EXEC_ENV_WINDOWS
+#define get_current_thread_id() ((uint64_t)pthread_self())
+#else
+#define get_current_thread_id() ((uint64_t)GetCurrentThreadId())
+#endif
+
+#ifdef SXE2_DPDK_DEBUG
+
+__rte_internal
+void
+sxe2_common_log_stream_open(void);
+
+__rte_internal
+void
+sxe2_common_log_stream_close(void);
+
+__rte_internal
+void
+sxe2_common_log_stream_init(void);
+
+#define SXE2_PMD_LOG(level, log_type, ...) \
+	RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \
+		get_current_thread_id() RTE_LOG_COMMA \
+		filename_printf(__FILE__) RTE_LOG_COMMA \
+		__LINE__ RTE_LOG_COMMA \
+		__func__, __VA_ARGS__)
+
+#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \
+	RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \
+		get_current_thread_id() RTE_LOG_COMMA \
+		filename_printf(__FILE__) RTE_LOG_COMMA \
+		__LINE__ RTE_LOG_COMMA \
+		__func__ RTE_LOG_COMMA \
+		adapter->port_id, __VA_ARGS__)
+
+#define PMD_LOG_DEBUG(logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_INFO(logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_NOTICE(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_WARN(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_ERR(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_CRIT(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_ALERT(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_EMERG(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#else
+#define SXE2_PMD_LOG(level, log_type, ...) \
+	RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \
+		__func__, __VA_ARGS__)
+
+#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \
+	RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \
+		__func__ RTE_LOG_COMMA \
+		adapter->port_id, __VA_ARGS__)
+
+#define PMD_LOG_DEBUG(logtype, fmt, ...) \
+	SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_INFO(logtype, fmt, ...) \
+	SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_NOTICE(logtype, fmt, ...) \
+	SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_WARN(logtype, fmt, ...) \
+	SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_ERR(logtype, fmt, ...) \
+	SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_CRIT(logtype, fmt, ...) \
+	SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_ALERT(logtype, fmt, ...) \
+	SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_EMERG(logtype, fmt, ...) \
+	SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \
+	SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#endif
+
+#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>")
+
+#ifdef SXE2_DPDK_DEBUG
+
+#define LOG_DEBUG(fmt, ...) \
+	PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__)
+
+#define LOG_INFO(fmt, ...) \
+	PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__)
+
+#define LOG_WARN(fmt, ...) \
+	PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__)
+
+#define LOG_ERROR(fmt, ...) \
+	PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__)
+
+#define LOG_DEBUG_BDF(dev_name, fmt, ...) \
+	PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_INFO_BDF(dev_name, fmt, ...) \
+	PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_WARN_BDF(dev_name, fmt, ...) \
+	PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_ERROR_BDF(dev_name, fmt, ...) \
+	PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__)
+
+#else
+#define LOG_DEBUG(fmt, ...)
+#define LOG_INFO(fmt, ...)
+#define LOG_WARN(fmt, ...)
+#define LOG_ERROR(fmt, ...)
+#define LOG_DEBUG_BDF(dev_name, fmt, ...) \
+	PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_INFO_BDF(dev_name, fmt, ...) \
+	PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_WARN_BDF(dev_name, fmt, ...) \
+	PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__)
+
+#define LOG_ERROR_BDF(dev_name, fmt, ...) \
+	PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__)
+#endif
+
+#ifdef SXE2_DPDK_DEBUG
+#define LOG_DEV_DEBUG(fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_DEBUG_BDF(adapter, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_DEV_INFO(fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_INFO_BDF(adapter, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_DEV_WARN(fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_WARN_BDF(adapter, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_DEV_ERR(fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_ERROR_BDF(adapter, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_MSG_DEBUG(msglvl, fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_DEBUG_BDF(adapter, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_MSG_INFO(msglvl, fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_INFO_BDF(adapter, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_MSG_WARN(msglvl, fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_WARN_BDF(adapter, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#define LOG_MSG_ERR(msglvl, fmt, ...) \
+	do { \
+		RTE_SET_USED(adapter); \
+		LOG_ERROR_BDF(adapter, fmt, ##__VA_ARGS__); \
+	} while (0)
+
+#else
+
+#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter)
+#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter)
+#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter)
+#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter)
+#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter)
+#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter)
+#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter)
+#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter)
+#endif
+
+#endif /* __SXE2_COMMON_LOG_H__ */
diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h
new file mode 100644
index 0000000000..89a715eaef
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_errno.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_ERRNO_H__
+#define __SXE2_ERRNO_H__
+#include <errno.h>
+
+enum sxe2_status {
+
+	SXE2_SUCCESS = 0,
+
+	SXE2_ERR_PERM = -EPERM,
+	SXE2_ERR_NOFILE = -ENOENT,
+	SXE2_ERR_NOENT = -ENOENT,
+	SXE2_ERR_SRCH = -ESRCH,
+	SXE2_ERR_INTR = -EINTR,
+	SXE2_ERR_IO = -EIO,
+	SXE2_ERR_NXIO = -ENXIO,
+	SXE2_ERR_2BIG = -E2BIG,
+	SXE2_ERR_NOEXEC = -ENOEXEC,
+	SXE2_ERR_BADF = -EBADF,
+	SXE2_ERR_CHILD = -ECHILD,
+	SXE2_ERR_AGAIN = -EAGAIN,
+	SXE2_ERR_NOMEM = -ENOMEM,
+	SXE2_ERR_ACCES = -EACCES,
+	SXE2_ERR_FAULT = -EFAULT,
+	SXE2_ERR_BUSY = -EBUSY,
+	SXE2_ERR_EXIST = -EEXIST,
+	SXE2_ERR_XDEV = -EXDEV,
+	SXE2_ERR_NODEV = -ENODEV,
+	SXE2_ERR_NOTSUP = -ENOTSUP,
+	SXE2_ERR_NOTDIR = -ENOTDIR,
+	SXE2_ERR_ISDIR = -EISDIR,
+	SXE2_ERR_INVAL = -EINVAL,
+	SXE2_ERR_NFILE = -ENFILE,
+	SXE2_ERR_MFILE = -EMFILE,
+	SXE2_ERR_NOTTY = -ENOTTY,
+	SXE2_ERR_FBIG = -EFBIG,
+	SXE2_ERR_NOSPC = -ENOSPC,
+	SXE2_ERR_SPIPE = -ESPIPE,
+	SXE2_ERR_ROFS = -EROFS,
+	SXE2_ERR_MLINK = -EMLINK,
+	SXE2_ERR_PIPE = -EPIPE,
+	SXE2_ERR_DOM = -EDOM,
+	SXE2_ERR_RANGE = -ERANGE,
+	SXE2_ERR_DEADLOCK = -EDEADLK,
+	SXE2_ERR_DEADLK = -EDEADLK,
+	SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG,
+	SXE2_ERR_NOLCK = -ENOLCK,
+	SXE2_ERR_NOSYS = -ENOSYS,
+	SXE2_ERR_NOTEMPTY = -ENOTEMPTY,
+	SXE2_ERR_ILSEQ = -EILSEQ,
+	SXE2_ERR_NODATA = -ENODATA,
+	SXE2_ERR_CANCELED = -ECANCELED,
+	SXE2_ERR_TIMEDOUT = -ETIMEDOUT,
+
+	SXE2_ERROR = -150,
+	SXE2_ERR_NO_MEMORY = -151,
+	SXE2_ERR_HW_VERSION = -152,
+	SXE2_ERR_FW_VERSION = -153,
+	SXE2_ERR_FW_MODE = -154,
+
+	SXE2_ERR_CMD_ERROR = -156,
+	SXE2_ERR_CMD_NO_MEMORY = -157,
+	SXE2_ERR_CMD_NOT_READY = -158,
+	SXE2_ERR_CMD_TIMEOUT = -159,
+	SXE2_ERR_CMD_CANCELED = -160,
+	SXE2_ERR_CMD_RETRY = -161,
+	SXE2_ERR_CMD_HW_CRITICAL = -162,
+	SXE2_ERR_CMD_NO_DATA = -163,
+	SXE2_ERR_CMD_INVAL_SIZE = -164,
+	SXE2_ERR_CMD_INVAL_TYPE = -165,
+	SXE2_ERR_CMD_INVAL_LEN = -165,
+	SXE2_ERR_CMD_INVAL_MAGIC = -166,
+	SXE2_ERR_CMD_INVAL_HEAD = -167,
+	SXE2_ERR_CMD_INVAL_ID = -168,
+
+	SXE2_ERR_DESC_NO_DONE = -171,
+
+	SXE2_ERR_INIT_ARGS_NAME_INVAL = -181,
+	SXE2_ERR_INIT_ARGS_VAL_INVAL = -182,
+	SXE2_ERR_INIT_VSI_CRITICAL = -183,
+
+	SXE2_ERR_CFG_FILE_PATH = -191,
+	SXE2_ERR_CFG_FILE = -192,
+	SXE2_ERR_CFG_INVALID_SIZE = -193,
+	SXE2_ERR_CFG_NO_PIPELINE_CFG = -194,
+
+	SXE2_ERR_RESET_TIMIEOUT = -200,
+	SXE2_ERR_VF_NOT_ACTIVE = -201,
+	SXE2_ERR_BUF_CSUM_ERR = -202,
+	SXE2_ERR_VF_DROP = -203,
+
+	SXE2_ERR_FLOW_PARAM = -301,
+	SXE2_ERR_FLOW_CFG = -302,
+	SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303,
+	SXE2_ERR_FLOW_PROF_EXISTS = -304,
+	SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305,
+	SXE2_ERR_FLOW_VSIG_FULL = -306,
+	SXE2_ERR_FLOW_VSIG_INFO = -307,
+	SXE2_ERR_FLOW_VSIG_NOT_FIND = -308,
+	SXE2_ERR_FLOW_VSIG_NOT_USED = -309,
+	SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310,
+	SXE2_ERR_FLOW_MAX_LIMIT = -311,
+
+	SXE2_ERR_SCHED_NEED_RECURSION = -400,
+
+	SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500,
+	SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501,
+};
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h
new file mode 100644
index 0000000000..984ea6214c
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_host_regs.h
@@ -0,0 +1,707 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_HOST_REGS_H__
+#define __SXE2_HOST_REGS_H__
+
+#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s))
+
+#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20))
+#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4))
+#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4))
+#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4))
+#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4))
+
+#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004
+#define SXE2_RXQ_CTRL_ENABLED 0x00000001
+#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3)
+
+#define SXE2_PCIEPROC_BASE 0x002d6000
+
+#define SXE2_PF_INT_BASE 0x00260000
+#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000)
+#define SXE2_PF_INT_ALLOC_FIRST 0x7FF
+#define SXE2_PF_INT_ALLOC_LAST_S 12
+#define SXE2_PF_INT_ALLOC_LAST (0x7FF << SXE2_PF_INT_ALLOC_LAST_S)
+#define SXE2_PF_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040)
+#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0)
+#define SXE2_PF_INT_OICR_UR BIT(1)
+#define SXE2_PF_INT_OICR_CA BIT(2)
+#define SXE2_PF_INT_OICR_VFLR BIT(3)
+#define SXE2_PF_INT_OICR_VFR_DONE BIT(4)
+#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5)
+#define SXE2_PF_INT_OICR_BFDE BIT(6)
+#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7)
+#define SXE2_PF_INT_OICR_ECC_ERR BIT(8)
+#define SXE2_PF_INT_OICR_GPIO BIT(9)
+#define SXE2_PF_INT_OICR_TSYN_TX BIT(11)
+#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12)
+#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13)
+#define SXE2_PF_INT_OICR_EXHAUST BIT(14)
+#define SXE2_PF_INT_OICR_FW BIT(15)
+#define SXE2_PF_INT_OICR_SWINT BIT(16)
+#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17)
+#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18)
+#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19)
+#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20)
+#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21)
+#define SXE2_PF_INT_OICR_GRST BIT(22)
+#define SXE2_PF_INT_OICR_FWQ_INT BIT(29)
+#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30)
+#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31)
+
+#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020)
+
+#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100)
+#define SXE2_PF_INT_FW_ABNORMAL BIT(0)
+#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1)
+#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18)
+#define SXE2_PF_INT_VFLR_DONE BIT(2)
+
+#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060)
+#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_OICR_CTL_ITR_IDX (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0)
+#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF
+#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_FWQ_CTL_ITR_IDX (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0)
+#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100)
+#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x)
+
+#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120)
+#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7
+#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8)
+
+#define SXE2_VFG_RAM_INIT_DONE (SXE2_PF_INT_BASE + 0x0128)
+#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0)
+#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1)
+#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2)
+
+#define SXE2_LINK_REG_GET_10G_VALUE 4
+#define SXE2_LINK_REG_GET_25G_VALUE 1
+#define SXE2_LINK_REG_GET_50G_VALUE 2
+#define SXE2_LINK_REG_GET_100G_VALUE 3
+
+#define SXE2_PORT0_CNT 0
+#define SXE2_PORT1_CNT 1
+#define SXE2_PORT2_CNT 2
+#define SXE2_PORT3_CNT 3
+
+#define SXE2_LINK_STATUS_BASE (0x002ac200)
+#define SXE2_LINK_STATUS_PORT0_POS 3
+#define SXE2_LINK_STATUS_PORT1_POS 11
+#define SXE2_LINK_STATUS_PORT2_POS 19
+#define SXE2_LINK_STATUS_PORT3_POS 27
+#define SXE2_LINK_STATUS_MASK 1
+
+#define SXE2_LINK_SPEED_BASE (0x002ac200)
+#define SXE2_LINK_SPEED_PORT0_POS 0
+#define SXE2_LINK_SPEED_PORT1_POS 8
+#define SXE2_LINK_SPEED_PORT2_POS 16
+#define SXE2_LINK_SPEED_PORT3_POS 24
+#define SXE2_LINK_SPEED_MASK 7
+
+#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4))
+#define SXE2_PFVP_INT_ALLOC_FIRST_S 0
+#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S)
+#define SXE2_PFVP_INT_ALLOC_LAST_S 12
+#define SXE2_PFVP_INT_ALLOC_LAST_M (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S)
+#define SXE2_PFVP_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4))
+#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0
+#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S)
+#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12
+#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S)
+#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4))
+#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0
+#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S)
+#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12
+#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S)
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16)
+
+#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4))
+#define SXE2_VSI_PF_ID_S 0
+#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S)
+#define SXE2_VSI_PF_EN_M BIT(3)
+
+#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4))
+#define SXE2_MBX_CTL_MSIX_INDX_S 0
+#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S)
+#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30)
+
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx))
+#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11
+#define SXE2_PF_INT_TQCTL_ITR_IDX (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S)
+#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx))
+#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11
+#define SXE2_PF_INT_RQCTL_ITR_IDX (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S)
+#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx))
+#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F)
+#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX (0x3F)
+#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6))
+#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7)
+#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \
+	(0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT)
+
+#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \
+	(SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx))
+#define SXE2_VF_INT_ITR_INTERVAL 0xFFF
+
+#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx))
+#define SXE2_VF_DYN_CTL_INTENABLE BIT(0)
+#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1)
+#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2)
+#define SXE2_VF_DYN_CTL_ITR_IDX_S 3
+#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3
+#define SXE2_VF_DYN_CTL_INTERVAL_S 5
+#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24)
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3
+
+#define SXE2_VF_DYN_CTL_INTENABLE_MSK BIT(31)
+
+#define SXE2_BAR4_MSIX_BASE 0
+#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10))
+#define SXE2_BAR4_MSIX_ENABLE 0
+#define SXE2_BAR4_MSIX_DISABLE 1
+
+#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4))
+
+#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT7_HEAD_S 0
+#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S)
+#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16
+#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S)
+
+#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100))
+
+#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800
+#define SXE2_TXQ_CTRL_SW_EN_M BIT(0)
+#define SXE2_TXQ_CTRL_HW_EN_M BIT(1)
+
+#define SXE2_TXQ_CTXT2_PROT_IDX_S 0
+#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0)
+#define SXE2_TXQ_CTXT2_CGD_IDX_S 4
+#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4)
+#define SXE2_TXQ_CTXT2_PF_IDX_S 9
+#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9)
+#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12
+#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12)
+#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23
+#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23)
+#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25
+#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25)
+#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26
+#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26)
+#define SXE2_TXQ_CTXT2_WB_MODE_S 27
+#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27)
+#define SXE2_TXQ_CTXT2_ITR_WB_S 28
+#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28)
+#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29
+#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29)
+#define SXE2_TXQ_CTXT2_SSO_EN_S 30
+#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30)
+
+#define SXE2_TXQ_CTXT3_SRC_VSI_S 0
+#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0)
+#define SXE2_TXQ_CTXT3_CPU_ID_S 12
+#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12)
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20)
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21)
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22)
+
+#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0
+#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13
+#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13)
+#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14
+#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14)
+#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15
+#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15)
+#define SXE2_TXQ_CTXT3_QLEN_S 16
+#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16)
+
+#define SXE2_RX_BUF_CHAINED_MAX 10
+#define SXE2_RX_DESC_BASE_ADDR_UNIT 7
+#define SXE2_RX_HBUF_LEN_UNIT 6
+#define SXE2_RX_DBUF_LEN_UNIT 7
+#define SXE2_RX_DBUF_LEN_MASK (~0x7F)
+#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7)
+
+enum {
+	SXE2_RX_CTXT0 = 0,
+	SXE2_RX_CTXT1,
+	SXE2_RX_CTXT2,
+	SXE2_RX_CTXT3,
+	SXE2_RX_CTXT4,
+	SXE2_RX_CTXT_CNT,
+};
+
+#define SXE2_RX_CTXT_BASE_L_S 0
+#define SXE2_RX_CTXT_BASE_L_W 32
+
+#define SXE2_RX_CTXT_BASE_H_S 0
+#define SXE2_RX_CTXT_BASE_H_W 25
+#define SXE2_RX_CTXT_DEPTH_L_S 25
+#define SXE2_RX_CTXT_DEPTH_L_W 7
+
+#define SXE2_RX_CTXT_DEPTH_H_S 0
+#define SXE2_RX_CTXT_DEPTH_H_W 6
+
+#define SXE2_RX_CTXT_DBUFF_S 6
+#define SXE2_RX_CTXT_DBUFF_W 7
+
+#define SXE2_RX_CTXT_HBUFF_S 13
+#define SXE2_RX_CTXT_HBUFF_W 5
+
+#define SXE2_RX_CTXT_HSPLT_TYPE_S 18
+#define SXE2_RX_CTXT_HSPLT_TYPE_W 2
+
+#define SXE2_RX_CTXT_DESC_TYPE_S 20
+#define SXE2_RX_CTXT_DESC_TYPE_W 1
+
+#define SXE2_RX_CTXT_CRC_S 21
+#define SXE2_RX_CTXT_CRC_W 1
+
+#define SXE2_RX_CTXT_L2TAG_FLAG_S 23
+#define SXE2_RX_CTXT_L2TAG_FLAG_W 1
+
+#define SXE2_RX_CTXT_HSPLT_0_S 24
+#define SXE2_RX_CTXT_HSPLT_0_W 4
+
+#define SXE2_RX_CTXT_HSPLT_1_S 28
+#define SXE2_RX_CTXT_HSPLT_1_W 2
+
+#define SXE2_RX_CTXT_INVALN_STP_S 31
+#define SXE2_RX_CTXT_INVALN_STP_W 1
+
+#define SXE2_RX_CTXT_LRO_ENABLE_S 0
+#define SXE2_RX_CTXT_LRO_ENABLE_W 1
+
+#define SXE2_RX_CTXT_CPUID_S 3
+#define SXE2_RX_CTXT_CPUID_W 8
+
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14
+
+#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25
+#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4
+
+#define SXE2_RX_CTXT_RELAX_DATA_S 29
+#define SXE2_RX_CTXT_RELAX_DATA_W 1
+
+#define SXE2_RX_CTXT_RELAX_WB_S 30
+#define SXE2_RX_CTXT_RELAX_WB_W 1
+
+#define SXE2_RX_CTXT_RELAX_RD_S 31
+#define SXE2_RX_CTXT_RELAX_RD_W 1
+
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1
+
+#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6
+#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3
+
+#define SXE2_RX_CTXT_VF_ID_S 9
+#define SXE2_RX_CTXT_VF_ID_W 8
+
+#define SXE2_RX_CTXT_PF_ID_S 17
+#define SXE2_RX_CTXT_PF_ID_W 3
+
+#define SXE2_RX_CTXT_VF_ENABLE_S 20
+#define SXE2_RX_CTXT_VF_ENABLE_W 1
+
+#define SXE2_RX_CTXT_VSI_ID_S 21
+#define SXE2_RX_CTXT_VSI_ID_W 10
+
+#define SXE2_PF_CTRLQ_FW_BASE 0x00312000
+#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000)
+#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080)
+#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100)
+#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180)
+#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200)
+#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280)
+#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300)
+#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380)
+#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400)
+#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480)
+
+#define SXE2_PF_CTRLQ_MBX_BASE 0x00316000
+#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100)
+#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180)
+#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200)
+#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280)
+#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300)
+#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380)
+#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400)
+#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480)
+#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500)
+#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580)
+
+#define SXE2_CMD_REG_LEN_M 0x3FF
+#define SXE2_CMD_REG_LEN_VFE_M BIT(28)
+#define SXE2_CMD_REG_LEN_OVFL_M BIT(29)
+#define SXE2_CMD_REG_LEN_CRIT_M BIT(30)
+#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31)
+
+#define SXE2_CMD_REG_HEAD_M 0x3FF
+
+#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500)
+#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0)
+#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1)
+
+#define SXE2_TOP_CFG_BASE 0x00292000
+#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c)
+#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0)
+
+#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214)
+#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0)
+#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8)
+#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16)
+#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24)
+#define SXE2_FW_VER_FIX_SHIFT (8)
+#define SXE2_FW_VER_SUB_SHIFT (16)
+#define SXE2_FW_VER_MAIN_SHIFT (24)
+
+#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c)
+
+#define SXE2_STATUS SXE2_FW_VER
+
+#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210)
+
+#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218)
+
+#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c)
+#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0)
+#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0)
+
+#define SXE2_TX_OE_BASE 0x00030000
+#define SXE2_RX_OE_BASE 0x00050000
+
+#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4))
+#define SXE2_VSI_L2TAGSTXVALID(_i) (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4))
+#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4))
+#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4))
+#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4))
+#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4))
+
+#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4))
+#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4))
+#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4))
+
+#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4))
+#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8))
+#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8))
+#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8))
+
+#define SXE2_L2TAG_ID_STAG 0
+#define SXE2_L2TAG_ID_OUT_VLAN1 1
+#define SXE2_L2TAG_ID_OUT_VLAN2 2
+#define SXE2_L2TAG_ID_VLAN 3
+
+#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF
+#define SXE2_PFP_L2TAGSEN_DVM BIT(10)
+
+#define SXE2_VSI_TSR_STRIP_TAG_S 0
+#define SXE2_VSI_TSR_SHOW_TAG_S 4
+
+#define SXE2_VSI_TSR_ID_STAG BIT(0)
+#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1)
+#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2)
+#define SXE2_VSI_TSR_ID_VLAN BIT(3)
+
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3)
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7)
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19)
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23)
+
+#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0
+#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2
+#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3
+#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4
+
+#define SXE2_SWITCH_OG_BASE 0x00140000
+#define SXE2_SWITCH_SWE_BASE 0x00150000
+#define SXE2_SWITCH_RG_BASE 0x00160000
+
+#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4))
+#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4))
+
+#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9)
+
+#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1)
+#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2)
+#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3)
+#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9)
+
+#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16)
+
+#define SXE2_PCIE_SYS_READY 0x38c
+#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0)
+#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2)
+#define SXE2_PCIE_SYS_READY_R5 BIT(3)
+#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16)
+
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21)
+
+#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630)
+#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586
+
+#define SXE2_PFGEN_CTRL (0x00336000)
+#define SXE2_PFGEN_CTRL_PFSWR BIT(0)
+
+#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4))
+#define SXE2_VFGEN_CTRL_VFSWR BIT(0)
+
+#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + ((_vf) * 4))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_INTERNAL_VER_H__ +#define __SXE2_INTERNAL_VER_H__ + +#define SXE2_VER_MAJOR_OFFSET (16) +#define SXE2_MK_VER(major, minor) \ + (major << SXE2_VER_MAJOR_OFFSET | minor) +#define SXE2_MK_VER_MAJOR(ver) ((ver >> SXE2_VER_MAJOR_OFFSET) & 0xff) +#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff) + +#define SXE2_ITR_VER_MAJOR_V100 1 +#define SXE2_ITR_VER_MAJOR_V200 2 + +#define SXE2_ITR_VER_MAJOR 1 +#define SXE2_ITR_VER_MINOR 1 +#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR) + +#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100) +#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200) + +#define SXE2LIB_ITR_VER_MAJOR 1 +#define SXE2LIB_ITR_VER_MINOR 1 +#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR) + +#define SXE2_DRV_CLI_VER_MAJOR 1 +#define SXE2_DRV_CLI_VER_MINOR 1 +#define SXE2_DRV_CLI_VER \ + SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR) + +#endif diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h new file mode 100644 index 0000000000..fd6823fe98 --- /dev/null +++ b/drivers/common/sxe2/sxe2_osal.h @@ -0,0 +1,584 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_OSAL_H__ +#define __SXE2_OSAL_H__ +#include <string.h> +#include <stdint.h> +#include <stdarg.h> +#include <inttypes.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_ether.h> +#include <rte_version.h> + +#include "sxe2_type.h" + +#define BIT(nr) (1UL << (nr)) +#ifndef __BITS_PER_LONG +#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG) +#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG)) + +#ifndef BIT_ULL +#define BIT_ULL(a) (1ULL << (a)) +#endif + +#define MIN(a, b) ((a) < (b) ? 
(a) : (b)) + +#define BITS_PER_BYTE 8 + +#define IS_UNICAST_ETHER_ADDR(addr) \ + ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0)) + +#define STRUCT_SIZE(ptr, field, num) \ + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) + +#ifndef TAILQ_FOREACH_SAFE +#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \ + for ((var) = TAILQ_FIRST((head)); \ + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \ + (var) = (tvar)) +#endif + +#define SXE2_QUEUE_WAIT_RETRY_CNT (50) + +#define __iomem + +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define dma_addr_t rte_iova_t + +#define resource_size_t u64 + +#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f) +#define ARRAY_SIZE(arr) RTE_DIM(arr) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +#define CPU_TO_BE16(o) rte_cpu_to_be_16(o) +#define CPU_TO_BE32(o) rte_cpu_to_be_32(o) +#define CPU_TO_BE64(o) rte_cpu_to_be_64(o) +#define BE16_TO_CPU(o) rte_be_to_cpu_16(o) + +#define NTOHS(a) rte_be_to_cpu_16(a) +#define NTOHL(a) rte_be_to_cpu_32(a) +#define HTONS(a) rte_cpu_to_be_16(a) +#define HTONL(a) rte_cpu_to_be_32(a) + +#define udelay(x) rte_delay_us(x) + +#define mdelay(x) rte_delay_us(1000 * (x)) + +#define msleep(x) rte_delay_us(1000 * (x)) + +#ifndef DIV_ROUND_UP +#define DIV_ROUND_UP(n, d) \ + (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) +#endif + +#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) + +#define __bf_shf(x) ((uint32_t)rte_bsf64(x)) + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG 32 +#endif + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask))) + +#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d) 
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef char s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
'common/zsda', # depends on bus. + 'common/sxe2', # depends on bus. 'mempool', # depends on common and bus. 'dma', # depends on common and bus. 'net', # depends on common, bus, mempool -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* Re: [PATCH v4 3/9] drivers: add sxe2 basic structures
  2026-05-01  1:59 ` [PATCH v4 3/9] drivers: add sxe2 basic structures liujie5
@ 2026-05-01  3:05   ` Stephen Hemminger
  0 siblings, 0 replies; 143+ messages in thread
From: Stephen Hemminger @ 2026-05-01  3:05 UTC (permalink / raw)
  To: liujie5; +Cc: dev

On Fri, 1 May 2026 09:59:18 +0800
liujie5@linkdatatechnology.com wrote:

> From: Jie Liu <liujie5@linkdatatechnology.com>
>
> This patch adds the base infrastructure for the sxe2 common
> library. It includes the mandatory OS abstraction layer (OSAL),
> common structure definitions, error codes, and the logging
> system implementation.
>
> Specifically, this commit:
> - Implements the logging stream management using RTE_LOG_LINE.
> - Defines device-specific error codes and status registers.
> - Adds the initial meson build configuration for the common library.
>
> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
> ---

NAK. NAK.

DPDK drivers must not reinvent their own logging, and they absolutely
must not hijack the log stream that is shared with the application and
the rest of DPDK. This patch does both.

Concrete problems in what you posted:

1. PMD_LOG_NOTICE/WARN/ERR/CRIT/ALERT/EMERG call
   rte_openlog_stream(g_sxe2_common_log_fp) before logging and
   rte_openlog_stream(NULL) after. rte_openlog_stream() sets the
   *global* DPDK log stream for the entire application. A driver has no
   business deciding where the application's logs go. As written, this
   will silently redirect every other PMD's, every library's, and the
   application's own output any time sxe2 emits a message.

2. Those same macros emit the message twice -- once before the stream
   is swapped and once after -- so every NOTICE/WARN/ERR comes out
   duplicated.

3. Drivers must not open files. fopen("/var/log/sxe2pmd.log.<ts>", "w+")
   from inside a PMD is a security problem on every level:
   - It runs with whatever privileges the application has, which for
     DPDK is typically root or CAP_*-loaded. Creating files in /var/log
     under those privileges is a classic symlink / TOCTOU attack
     surface.
   - The path is attacker-influenceable in the timestamp component and
     is not created with O_CREAT|O_EXCL, no mode argument, no directory
     fd, none of the hardening you would expect.
   - Log content is written without any escaping; anything an attacker
     can get into a log message ends up in a file the operator will
     later cat or grep.
   - It bypasses whatever logging policy the operator, distro, systemd
     unit, container runtime, SELinux/AppArmor profile, or application
     has configured. A driver silently writing to /var/log is exactly
     the kind of thing those policies exist to prevent.
   - It doesn't work for non-root users, in unprivileged containers, on
     read-only rootfs systems, or on Windows.
   Drivers log through rte_log. The application decides where those
   logs go. Full stop.

4. RTE_LOG_REGISTER_SUFFIX(..., com, DEBUG) defaults the log type to
   DEBUG. The default level is the application's choice, not the
   driver's.

5. SXE2_DPDK_DEBUG is unconditionally defined in
   drivers/common/sxe2/meson.build, so the "debug" path with the file
   hijacking is always on. There is no off switch.

6. SXE2_PMD_LOG adds a thread id, basename, line number, and function
   name to every line. If those are useful, they are useful for every
   driver -- not just yours.

General principles, which apply to every driver:

- Drivers do not reinvent logging. Use RTE_LOG / RTE_LOG_LINE and a
  log type registered for the driver. That is it.
- Drivers do not open files, and do not change logging behavior that is
  shared with the application or with other drivers. Both have security
  implications well beyond this driver.
- If something is genuinely missing from the common logging
  infrastructure -- per-driver log files, richer prefixes, thread ids,
  structured fields, whatever -- propose it as a change to lib/log so
  every driver and every application benefits, and so it gets reviewed
  for security by people who do that for a living. Do not bury it
  inside one driver.
- "It works for our embedded use case" is not an argument. DPDK runs on
  Linux, FreeBSD, and Windows; on x86, Arm, POWER, and RISC-V; in bare
  metal, VMs, and containers; as root and as unprivileged users. A
  driver has to behave reasonably in all of those.
- Once we make an exception for one driver, every subsystem will expect
  one. That is not happening.

Please drop sxe2_common_log.c and sxe2_common_log.h entirely. Use the
existing RTE_LOG_* macros directly against a log type registered for
your driver, and respect the level and stream that the application has
configured.

While you're at it, the same "this driver is a special snowflake"
pattern shows up elsewhere in this series and gets the same answer:
- sxe2_type.h redefines u8/u16/u32/u64/s8/.../__le16/__be32 instead of
  using uintN_t and rte_beN_t.
- sxe2_errno.h reinvents errno values that already exist.
- sxe2_osal.h re-implements bitmap operations, allocation wrappers, and
  an OS abstraction layer that DPDK already provides.
This is the cost of being part of a shared community project across
many platforms and architectures. Please rework on that basis before
reposting.

Stephen

PS: AI revised this text, my initial version was probably not safe for
public consumption.

^ permalink raw reply	[flat|nested] 143+ messages in thread
* [PATCH v4 4/9] common/sxe2: add base driver skeleton 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (2 preceding siblings ...) 2026-05-01 1:59 ` [PATCH v4 3/9] drivers: add sxe2 basic structures liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 1:59 ` [PATCH v4 5/9] drivers: add base driver probe skeleton liujie5 ` (4 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between the user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 2 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ 6 files changed, 1071 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build index 7d448629d5..3626fb1119 100644 --- a/drivers/common/sxe2/meson.build +++ b/drivers/common/sxe2/meson.build @@ -9,5 +9,7 @@ cflags += [ deps += ['bus_pci', 'net', 'eal', 'ethdev'] sources = files( + 'sxe2_common.c', 'sxe2_common_log.c', + 'sxe2_ioctl_chnl.c', ) diff --git a/drivers/common/sxe2/sxe2_common.c 
b/drivers/common/sxe2/sxe2_common.c new file mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ 
+ struct sxe2_class_driver *cdrv = NULL; + + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void 
*args) +{ + u32 *class_type = (u32 *)args; + s32 ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshark failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + 
TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + 
} + + cdev->cdrv = cdrv; +l_end: + return ret; +} + +static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto 
l_free_args; + } + + ret = sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool 
exists = false; + + for (i = 0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + 
sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_commoin_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_commoin_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); +#ifdef SXE2_DPDK_DEBUG + sxe2_common_log_stream_init(); +#endif + sxe2_common_pci_init(); + sxe2_commoin_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..f62e00e053 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + + #include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = SXE2_ERR_IO; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]" + "opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct 
rte_pci_device *pci_dev) +{ + s32 ret = SXE2_SUCCESS; + s32 fd = 0; + s8 drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd > 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshark with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshark, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + 
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v4 5/9] drivers: add base driver probe skeleton 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (3 preceding siblings ...) 2026-05-01 1:59 ` [PATCH v4 4/9] common/sxe2: add base driver skeleton liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 1:59 ` [PATCH v4 6/9] drivers: support PCI BAR mapping liujie5 ` (3 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 22 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3025 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h 
create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 
'sfc', 'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..160a0de8ed --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Process the base subdirectory and collect its target objects + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret =
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu +
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter,
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "enable queues failed"); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + goto l_end; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->tx_queue_offload_capa |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; +
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret 
= SXE2_SUCCESS; + + if (!cdev) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + 
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *rxq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t)pkts; + RTE_ATOMIC(uint64_t)bytes; + RTE_ATOMIC(uint64_t)drop_pkts; + RTE_ATOMIC(uint64_t)drop_bytes; + RTE_ATOMIC(uint64_t)unicast_pkts; + RTE_ATOMIC(uint64_t)multicast_pkts; + RTE_ATOMIC(uint64_t)broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...) PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v4 6/9] drivers: support PCI BAR mapping 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (4 preceding siblings ...) 2026-05-01 1:59 ` [PATCH v4 5/9] drivers: add base driver probe skeleton liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 1:59 ` [PATCH v4 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (2 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + 
cmd_fd, bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 is used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = 
map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + 
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v4 7/9] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (5 preceding siblings ...) 2026-05-01 1:59 ` [PATCH v4 6/9] drivers: support PCI BAR mapping liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 1:59 ` [PATCH v4 8/9] net/sxe2: support queue setup and control liujie5 2026-05-01 1:59 ` [PATCH v4 9/9] net/sxe2: add data path for Rx and Tx liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get device to map."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma map, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get device to unmap."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c 
b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "iommu does not support pa mode"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "no iommu, va mode not supported, please use pa mode."); + ret = SXE2_ERR_IO; + goto l_end; + } + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if 
(cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
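The IOVA-mode check at the top of sxe2_drv_dev_dma_map() encodes a small compatibility matrix between the EAL IOVA mode and kernel IOMMU support: PA mode needs no ioctl mapping (and is invalid with an IOMMU), while VA mode requires an IOMMU. As a rough sketch, the decision can be isolated into a standalone predicate (the names `dma_map_allowed`, `IOVA_PA`/`IOVA_VA` are hypothetical stand-ins, not part of the driver or of DPDK):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the modes returned by rte_eal_iova_mode(). */
enum iova_mode { IOVA_PA, IOVA_VA };

/*
 * Sketch of the mode check in sxe2_drv_dev_dma_map():
 * - PA mode: no ioctl mapping is issued; combining PA with an IOMMU
 *   is rejected because the device would bypass the translation tables.
 * - VA mode: requires an IOMMU, since the device must be able to
 *   translate userspace virtual addresses.
 * Returns true when the ioctl-based mapping should proceed (VA + IOMMU);
 * *err is set nonzero for the two invalid combinations.
 */
static bool dma_map_allowed(enum iova_mode mode, bool support_iommu, int *err)
{
    *err = 0;
    if (mode == IOVA_PA) {
        if (support_iommu)
            *err = -1; /* iommu does not support pa mode */
        return false;  /* PA mode: nothing to map via ioctl */
    }
    /* IOVA_VA */
    if (!support_iommu) {
        *err = -1; /* va mode without an iommu is unusable */
        return false;
    }
    return true;
}
```

Only the VA + IOMMU case falls through to the SXE2_COM_CMD_DMA_MAP ioctl in the patch; the PA-without-IOMMU case returns success without touching the channel.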
* [PATCH v4 8/9] net/sxe2: support queue setup and control 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (6 preceding siblings ...) 2026-05-01 1:59 ` [PATCH v4 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 1:59 ` [PATCH v4 9/9] net/sxe2: add data path for Rx and Tx liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 160a0de8ed..803e47c1aa 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -17,6 +17,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 
fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if 
(bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { #define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const 
struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold 
sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth = ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + 
rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configured with Keep CRC.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, 
socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc *desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = 
rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + 
(void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + 
u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2 tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v4 9/9] net/sxe2: add data path for Rx and Tx 2026-05-01 1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (7 preceding siblings ...) 2026-05-01 1:59 ` [PATCH v4 8/9] net/sxe2: support queue setup and control liujie5 @ 2026-05-01 1:59 ` liujie5 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 8 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-01 1:59 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for the sxe2 PMD. Add sxe2_tx_pkts and sxe2_rx_pkts_scattered as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 13 +- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 11 files changed, 1082 insertions(+), 133 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 537d4e9f6a..d2ed1460a3 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -28,7 +28,7 @@ static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list static 
TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); -static pthread_mutex_t sxe2_common_devices_list_lock; +static rte_spinlock_t sxe2_common_devices_list_lock; static struct rte_pci_id *sxe2_common_pci_id_table; @@ -223,9 +223,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( cdev->config.kernel_reset = false; rte_ticketlock_init(&cdev->config.lock); - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); l_end: return cdev; @@ -233,10 +233,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( static void sxe2_common_device_free(struct sxe2_common_device *cdev) { - - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); rte_free(cdev); } @@ -662,7 +661,7 @@ sxe2_common_init(void) if (sxe2_commoin_inited) goto l_end; - pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + rte_spinlock_init(&sxe2_common_devices_list_lock); #ifdef SXE2_DPDK_DEBUG sxe2_common_log_stream_init(); #endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) 
\ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) 
\ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ 
-178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "bar_idx=%d, fd=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; } @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu, va mode not supported, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto 
l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 803e47c1aa..728a88b6a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -19,6 +19,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < 
bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > 
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
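[Editor's note] The scattered Rx paths in this patch handle a subtle CRC-strip corner case: when hardware leaves the 4-byte Ethernet FCS in the buffers (`rxq->crc_len > 0`), the final segment of a chained packet may hold nothing but the CRC (or part of it), in which case the driver frees that segment and trims the remainder off the previous one. A driver-independent sketch of just that arithmetic (`crc_trim_last_seg` is a hypothetical helper, not sxe2 API):

```c
#include <stddef.h>

#define ETHER_CRC_LEN 4 /* Ethernet FCS length, as in rte_ether.h */

/*
 * Model of the last-segment CRC trim in a scattered receive path.
 * prev_len/last_len are the data lengths of the last two segments.
 * On return, *drop_last tells the caller to free the final segment;
 * the return value is the new data length of the last retained segment.
 */
static size_t
crc_trim_last_seg(size_t prev_len, size_t last_len, int *drop_last)
{
	if (last_len <= ETHER_CRC_LEN) {
		/* CRC spills across segments: drop the final segment and
		 * shave the leftover CRC bytes off the previous one. */
		*drop_last = 1;
		return prev_len + last_len - ETHER_CRC_LEN;
	}
	/* The whole CRC sits in the last segment: just shorten it. */
	*drop_last = 0;
	return last_len - ETHER_CRC_LEN;
}
```

This mirrors the driver's `last_seg->data_len = last_seg->data_len + pkt_len - RTE_ETHER_CRC_LEN` branch; `first_seg->pkt_len` is reduced by the full `RTE_ETHER_CRC_LEN` in either branch.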
* [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver 2026-05-01 1:59 ` [PATCH v4 9/9] net/sxe2: add data path for Rx and Tx liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 3:33 ` [PATCH v5 1/9] mailmap: add Jie Liu liujie5 ` (8 more replies) 0 siblings, 9 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> V5: - Addressed AI comments Jie Liu (9): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control net/sxe2: add data path for Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 3 + drivers/common/sxe2/meson.build | 15 + drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 26 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 975 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 316 +++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 
39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 249 ++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 782 +++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 40 files changed, 8688 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h 
create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
* [PATCH v5 1/9] mailmap: add Jie Liu 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 3:33 ` [PATCH v5 2/9] doc: add sxe2 guide and release notes liujie5 ` (7 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add Jie Liu's email to .mailmap file to ensure consistent author attribution across all commits. This helps maintain clean git history and ensures correct author information in release notes and changelogs. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 0e0d83e1c6..a6c3319dec 100644 --- a/.mailmap +++ b/.mailmap @@ -738,6 +738,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v5 2/9] doc: add sxe2 guide and release notes 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 2026-05-01 3:33 ` [PATCH v5 1/9] mailmap: add Jie Liu liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 3:33 ` [PATCH v5 3/9] drivers: add sxe2 basic structures liujie5 ` (6 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for the SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 3 +++ 4 files changed, 38 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates it is only supported when the non-vector +; path is selected.
+; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps network adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported. + +Implementation details +---------------------- + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resource allocations are handled by the kernel, +combined with hardware specifications that allow it to handle virtual memory +addresses directly, ensures that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index 060b26ff61..93fb0072a9 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -55,6 +55,9 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added Linkdata sxe2 ethernet driver.** + + Added a network driver for Linkdata network adapters.
Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v5 3/9] drivers: add sxe2 basic structures 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (2 preceding siblings ...) 2026-05-01 3:33 ` [PATCH v5 2/9] doc: add sxe2 guide and release notes liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 14:46 ` Stephen Hemminger 2026-05-01 3:33 ` [PATCH v5 4/9] common/sxe2: add base driver skeleton liujie5 ` (5 subsequent siblings) 8 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add the base infrastructure for the sxe2 common library. It includes the OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library.
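[Editor's note] The logging code in the diff below builds a timestamped log-file name ("/var/log/sxe2pmd.log.YYYY-MM-DD-HH:MM:SS") at init time. A stand-alone sketch of that naming scheme (`build_log_filename` is a hypothetical helper for illustration; the driver does this inline in `sxe2_common_log_stream_init()` into a global buffer):

```c
#include <stdio.h>
#include <time.h>

/*
 * Build "<dir>sxe2pmd.log.YYYY-MM-DD-HH:MM:SS" into buf.
 * The prefix is written with snprintf and the local-time stamp is
 * appended with strftime, the same two-step scheme the driver uses.
 */
static void
build_log_filename(char *buf, size_t buflen, const char *dir, time_t now)
{
	int len = snprintf(buf, buflen, "%ssxe2pmd.log.", dir);
	struct tm *td = localtime(&now);

	strftime(buf + (size_t)len, buflen - (size_t)len,
		 "%Y-%m-%d-%H:%M:%S", td);
}
```

One detail worth noting: the sketch keeps the `snprintf` return in an `int`, whereas the patch stores it in a `u8 len`, which would silently truncate if the path prefix ever grew past 255 bytes.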
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 13 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1959 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..7d448629d5 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void +sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, 
Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) 
\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) \ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(adapter, fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(adapter, fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) 
\ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(adapter, fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(adapter, fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(adapter, fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(adapter, fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(adapter, fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(adapter, fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* __SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = -ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, 
+ + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMIEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT 
BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 +#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) 
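Readability note on the OICR block above: every cause is a single bit in one 32-bit register, so the service routine just masks a snapshot of the register. A standalone sketch of that decode (BIT() mirrors the kernel-style helper this header relies on, and `sxe2_oicr_pending()` is a made-up consumer for illustration; only the three bit positions come from the header):

```c
#include <assert.h>
#include <stdint.h>

/* BIT() and sxe2_oicr_pending() are assumptions for illustration;
 * the cause-bit positions are copied from SXE2_PF_INT_OICR_*. */
#define BIT(n) (1UL << (n))
#define SXE2_PF_INT_OICR_VFLR BIT(3)
#define SXE2_PF_INT_OICR_FW   BIT(15)
#define SXE2_PF_INT_OICR_GRST BIT(22)

/* Count how many of the tracked causes are set in an OICR snapshot. */
static int sxe2_oicr_pending(uint32_t oicr)
{
	int cnt = 0;

	if (oicr & SXE2_PF_INT_OICR_VFLR)
		cnt++;
	if (oicr & SXE2_PF_INT_OICR_FW)
		cnt++;
	if (oicr & SXE2_PF_INT_OICR_GRST)
		cnt++;

	return cnt;
}
```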
+#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + 
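The per-port link status and speed share one register, with a 1-bit status flag and a 3-bit speed code at fixed offsets. A minimal sketch of the shift-and-mask decode for port 0 (helper names are invented here; the positions, masks, and the 25G speed encoding are taken from the defines above):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Helper names are assumptions; constants copied from the header. */
#define SXE2_LINK_STATUS_PORT0_POS  3
#define SXE2_LINK_STATUS_MASK       1
#define SXE2_LINK_SPEED_PORT0_POS   0
#define SXE2_LINK_SPEED_MASK        7
#define SXE2_LINK_REG_GET_25G_VALUE 1

/* Link-up bit for port 0, from a SXE2_LINK_STATUS_BASE register read. */
static bool sxe2_port0_link_up(uint32_t reg)
{
	return ((reg >> SXE2_LINK_STATUS_PORT0_POS) & SXE2_LINK_STATUS_MASK) != 0;
}

/* 3-bit speed code for port 0 from the same register value. */
static uint32_t sxe2_port0_speed_code(uint32_t reg)
{
	return (reg >> SXE2_LINK_SPEED_PORT0_POS) & SXE2_LINK_SPEED_MASK;
}
```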
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define 
SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define 
SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define 
SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 
0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 
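The SXE2_FW_VER register packs main.sub.fix.build as four bytes, which the mask/shift pairs above decode. A self-contained sketch (the struct and function names are made up for illustration; the mask values and shifts match the SXE2_FW_VER_* defines):

```c
#include <assert.h>
#include <stdint.h>

/* Names are assumptions; mask/shift values copied from the header. */
#define SXE2_FW_VER_BUILD_M    (0xFFUL << 0)
#define SXE2_FW_VER_FIX_M      (0xFFUL << 8)
#define SXE2_FW_VER_SUB_M      (0xFFUL << 16)
#define SXE2_FW_VER_MAIN_M     (0xFFUL << 24)
#define SXE2_FW_VER_FIX_SHIFT  (8)
#define SXE2_FW_VER_SUB_SHIFT  (16)
#define SXE2_FW_VER_MAIN_SHIFT (24)

struct sxe2_fw_version {
	uint8_t main_ver;
	uint8_t sub_ver;
	uint8_t fix_ver;
	uint8_t build;
};

/* Split a raw SXE2_FW_VER register value into its four byte fields. */
static struct sxe2_fw_version sxe2_fw_ver_parse(uint32_t reg)
{
	struct sxe2_fw_version v;

	v.main_ver = (reg & SXE2_FW_VER_MAIN_M) >> SXE2_FW_VER_MAIN_SHIFT;
	v.sub_ver = (reg & SXE2_FW_VER_SUB_M) >> SXE2_FW_VER_SUB_SHIFT;
	v.fix_ver = (reg & SXE2_FW_VER_FIX_M) >> SXE2_FW_VER_FIX_SHIFT;
	v.build = reg & SXE2_FW_VER_BUILD_M;

	return v;
}
```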
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */
+
+#ifndef __SXE2_INTERNAL_VER_H__
+#define __SXE2_INTERNAL_VER_H__
+
+#define SXE2_VER_MAJOR_OFFSET (16)
+#define SXE2_MK_VER(major, minor) \
+	(((major) << SXE2_VER_MAJOR_OFFSET) | (minor))
+#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff)
+#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff)
+
+#define SXE2_ITR_VER_MAJOR_V100 1
+#define SXE2_ITR_VER_MAJOR_V200 2
+
+#define SXE2_ITR_VER_MAJOR 1
+#define SXE2_ITR_VER_MINOR 1
+#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR)
+
+#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100)
+#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200)
+
+#define SXE2LIB_ITR_VER_MAJOR 1
+#define SXE2LIB_ITR_VER_MINOR 1
+#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR)
+
+#define SXE2_DRV_CLI_VER_MAJOR 1
+#define SXE2_DRV_CLI_VER_MINOR 1
+#define SXE2_DRV_CLI_VER \
+	SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR)
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
new file mode 100644
index 0000000000..fd6823fe98
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -0,0 +1,584 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_OSAL_H__
+#define __SXE2_OSAL_H__
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_version.h>
+
+#include "sxe2_type.h"
+
+#define BIT(nr) (1UL << (nr))
+#ifndef __BITS_PER_LONG
+#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
+#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define MIN(a, b) ((a) < (b) ? (a) : (b))
+
+#define BITS_PER_BYTE 8
+
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+
+#define STRUCT_SIZE(ptr, field, num) \
+	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	     (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	     (var) = (tvar))
+#endif
+
+#define SXE2_QUEUE_WAIT_RETRY_CNT (50)
+
+#define __iomem
+
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)((n) & 0xffffffff))
+
+#define dma_addr_t rte_iova_t
+
+#define resource_size_t u64
+
+#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f)
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define BE16_TO_CPU(o) rte_be_to_cpu_16(o)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+#define udelay(x) rte_delay_us(x)
+
+#define mdelay(x) rte_delay_us(1000 * (x))
+
+#define msleep(x) rte_delay_us(1000 * (x))
+
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) \
+	(((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d))
+#endif
+
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#define __bf_shf(x) ((uint32_t)rte_bsf64(x))
+
+#ifndef BITS_PER_LONG
+#define BITS_PER_LONG 32
+#endif
+
+#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask))
+#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)))
+
+#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d)
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef char s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
        'common/zsda', # depends on bus.
+        'common/sxe2', # depends on bus.
        'mempool', # depends on common and bus.
        'dma', # depends on common and bus.
        'net', # depends on common, bus, mempool
--
2.47.3

^ permalink raw reply related	[flat|nested] 143+ messages in thread
* Re: [PATCH v5 3/9] drivers: add sxe2 basic structures
  2026-05-01  3:33 ` [PATCH v5 3/9] drivers: add sxe2 basic structures liujie5
@ 2026-05-01 14:46   ` Stephen Hemminger
  0 siblings, 0 replies; 143+ messages in thread
From: Stephen Hemminger @ 2026-05-01 14:46 UTC (permalink / raw)
  To: liujie5; +Cc: dev

On Fri, 1 May 2026 11:33:53 +0800
liujie5@linkdatatechnology.com wrote:

> From: Jie Liu <liujie5@linkdatatechnology.com>
>
> This patch adds the base infrastructure for the sxe2 common
> library. It includes the mandatory OS abstraction layer (OSAL),
> common structure definitions, error codes, and the logging
> system implementation.
>
> Specifically, this commit:
> - Implements the logging stream management using RTE_LOG_LINE.
> - Defines device-specific error codes and status registers.
> - Adds the initial meson build configuration for the common library.
>
> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
> ---

You ignored the feedback about drivers writing files and changing logging.

^ permalink raw reply	[flat|nested] 143+ messages in thread
* [PATCH v5 4/9] common/sxe2: add base driver skeleton 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (2 preceding siblings ...) 2026-05-01 3:33 ` [PATCH v5 3/9] drivers: add sxe2 basic structures liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 3:33 ` [PATCH v5 5/9] drivers: add base driver probe skeleton liujie5 ` (4 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between the user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 2 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ 6 files changed, 1071 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build index 7d448629d5..3626fb1119 100644 --- a/drivers/common/sxe2/meson.build +++ b/drivers/common/sxe2/meson.build @@ -9,5 +9,7 @@ cflags += [ deps += ['bus_pci', 'net', 'eal', 'ethdev'] sources = files( + 'sxe2_common.c', 'sxe2_common_log.c', + 'sxe2_ioctl_chnl.c', ) diff --git a/drivers/common/sxe2/sxe2_common.c 
b/drivers/common/sxe2/sxe2_common.c new file mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ 
+ struct sxe2_class_driver *cdrv = NULL; + + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void 
*args) +{ + u32 *class_type = (u32 *)args; + s32 ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshark failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + 
TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + 
} + + cdev->cdrv = cdrv; +l_end: + return ret; +} + +static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto 
l_free_args; + } + + ret = sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool 
exists = false; + + for (i = 0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + 
sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_common_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_common_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); +#ifdef SXE2_DPDK_DEBUG + sxe2_common_log_stream_init(); +#endif + sxe2_common_pci_init(); + sxe2_common_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..f62e00e053 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + + #include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = SXE2_ERR_IO; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]" + "opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct 
rte_pci_device *pci_dev) +{ + s32 ret = SXE2_SUCCESS; + s32 fd = 0; + s8 drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd > 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshark with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshark, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + 
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v5 5/9] drivers: add base driver probe skeleton 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (3 preceding siblings ...) 2026-05-01 3:33 ` [PATCH v5 4/9] common/sxe2: add base driver skeleton liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 3:33 ` [PATCH v5 6/9] drivers: support PCI BAR mapping liujie5 ` (3 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 22 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3025 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h 
create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 
'sfc', 'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..160a0de8ed --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Process the base subdirectory and collect its target objects + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash','cryptodev','security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, 
struct sxe2_tx_queue *txq, u16 txq_cnt)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_txq_cfg_req *req;
+	u16 len = 0;
+
+	len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt);
+	req = rte_zmalloc("sxe2_txq_cfg", len, 0);
+	if (req == NULL) {
+		PMD_LOG_ERR(TX, "txq cfg mem alloc failed");
+		ret = SXE2_ERR_NO_MEMORY;
+		goto l_end;
+	}
+
+	sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt);
+
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE,
+			req, len, NULL, 0);
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret);
+
+l_end:
+	if (req)
+		rte_free(req);
+	return ret;
+}
+
+s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_q_switch_req req;
+
+	req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id);
+	req.q_idx = rxq->queue_id;
+
+	req.is_enable = (u8)enable;
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE,
+			&req, sizeof(req), NULL, 0);
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret)
+		PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d",
+				enable, ret);
+
+	return ret;
+}
+
+s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable)
+{
+	s32 ret = SXE2_SUCCESS;
+	struct sxe2_common_device *cdev = adapter->cdev;
+	struct sxe2_drv_cmd_params param = {0};
+	struct sxe2_drv_q_switch_req req;
+
+	req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id);
+	req.q_idx = txq->queue_id;
+
+	req.is_enable = (u8)enable;
+	sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE,
+			&req, sizeof(req), NULL, 0);
+
+	ret = sxe2_drv_cmd_exec(cdev, &param);
+	if (ret) {
+		PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d",
+				enable, ret);
+	}
+
+	return
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "enable queues failed"); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + goto l_end; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
+
+	if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP)
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = SXE2_DEFAULT_RX_PTHRESH,
+			.hthresh = SXE2_DEFAULT_RX_HTHRESH,
+			.wthresh = SXE2_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = SXE2_DEFAULT_TX_PTHRESH,
+			.hthresh = SXE2_DEFAULT_TX_HTHRESH,
+			.wthresh = SXE2_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = SXE2_MAX_RING_DESC,
+		.nb_min = SXE2_MIN_RING_DESC,
+		.nb_align = SXE2_ALIGN,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = SXE2_MAX_RING_DESC,
+		.nb_min = SXE2_MIN_RING_DESC,
+		.nb_align = SXE2_ALIGN,
+		.nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX,
+		.nb_seg_max = SXE2_MAX_RING_DESC,
+	};
+
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
+
+	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+	dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST;
+	dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret 
= SXE2_SUCCESS;
+
+	if (!cdev) {
+		ret = SXE2_ERR_INVAL;
+		goto l_end;
+	}
+
+	eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter));
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		if (eth_dev == NULL) {
+			PMD_LOG_ERR(INIT, "Can not allocate ethdev");
+			ret = SXE2_ERR_NOMEM;
+			goto l_end;
+		}
+	} else {
+		if (!eth_dev) {
+			PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev");
+			ret = SXE2_ERR_INVAL;
+			goto l_end;
+		}
+	}
+
+	adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev);
+	adapter->dev_port_id = eth_dev->data->port_id;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		adapter->cdev = cdev;
+
+	ret = sxe2_dev_init(eth_dev, kvargs);
+	if (ret != SXE2_SUCCESS) {
+		PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret);
+		goto l_release_port;
+	}
+
+	rte_eth_dev_probing_finish(eth_dev);
+	PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!");
+	goto l_end;
+
+l_release_port:
+	(void)rte_eth_dev_release_port(eth_dev);
+l_end:
+	return ret;
+}
+
+static s32 sxe2_parse_eth_devargs(struct rte_device *dev,
+		struct rte_eth_devargs *eth_da)
+{
+	int ret = 0;
+
+	if (dev->devargs == NULL)
+		return 0;
+
+	memset(eth_da, 0, sizeof(*eth_da));
+
+	if (dev->devargs->cls_str) {
+		ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1);
+		if (ret != 0) {
+			PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
+					dev->devargs->cls_str);
+			return -rte_errno;
+		}
+	}
+
+	if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) {
+		ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1);
+		if (ret) {
+			PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
+					dev->devargs->args);
+			return -rte_errno;
+		}
+	}
+
+	return 0;
+}
+
+static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs)
+{
+	struct rte_eth_devargs eth_da = { .nb_ports = 0 };
+	s32 ret = SXE2_SUCCESS;
+
+	ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da);
+	if (ret != 0) {
+		ret = SXE2_ERR_INVAL;
+		goto
l_end;
+	}
+
+	ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs);
+
+l_end:
+	return ret;
+}
+
+static struct sxe2_class_driver sxe2_eth_pmd = {
+	.drv_class = SXE2_CLASS_TYPE_ETH,
+	.name = "SXE2_ETH_PMD_DRIVER_NAME",
+	.probe = sxe2_eth_pmd_probe,
+	.remove = sxe2_eth_pmd_remove,
+	.id_table = pci_id_sxe2_tbl,
+	.intr_lsc = 1,
+	.intr_rmv = 1,
+};
+
+RTE_INIT(rte_sxe2_pmd_init)
+{
+	sxe2_common_init();
+	sxe2_class_driver_register(&sxe2_eth_pmd);
+}
+
+RTE_PMD_EXPORT_NAME(net_sxe2);
+RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl);
+RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2");
+
+#ifdef SXE2_DPDK_DEBUG
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG);
+#else
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE);
+#endif
diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h
new file mode 100644
index 0000000000..dc3a3175d1
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_ethdev.h
@@ -0,0 +1,295 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03
+
+#define SXE2_MODULE_SFF_8079 0x1
+#define SXE2_MODULE_SFF_8079_LEN 256
+#define SXE2_MODULE_SFF_8472 0x2
+#define SXE2_MODULE_SFF_8472_LEN 512
+#define SXE2_MODULE_SFF_8636 0x3
+#define SXE2_MODULE_SFF_8636_LEN 256
+#define SXE2_MODULE_SFF_8636_MAX_LEN 640
+#define SXE2_MODULE_SFF_8436 0x4
+#define SXE2_MODULE_SFF_8436_LEN 256
+#define SXE2_MODULE_SFF_8436_MAX_LEN 640
+
+enum sxe2_wk_type {
+	SXE2_WK_MONITOR,
+	SXE2_WK_MONITOR_IM,
+	SXE2_WK_POST,
+	SXE2_WK_MBX,
+};
+
+enum {
+	SXE2_FLAG_LEGACY_RX_ENABLE = 0,
+	SXE2_FLAG_LRO_ENABLE = 1,
+	SXE2_FLAG_RXQ_DISABLED = 2,
+	SXE2_FLAG_TXQ_DISABLED = 3,
+	SXE2_FLAG_DRV_REMOVING = 4,
+	SXE2_FLAG_RESET_DETECTED = 5,
+	SXE2_FLAG_CORE_RESET_DONE = 6,
+	SXE2_FLAG_RESET_ACTIVED = 7,
+	SXE2_FLAG_RESET_PENDING = 8,
+	SXE2_FLAG_RESET_REQUEST = 9,
+	SXE2_FLAGS_RESET_PROCESS_DONE = 10,
+	SXE2_FLAG_RESET_FAILED = 11,
+	SXE2_FLAG_DRV_PROBE_DONE = 12,
+	SXE2_FLAG_NETDEV_REGISTED = 13,
+	SXE2_FLAG_DRV_UP = 15,
+	SXE2_FLAG_DCB_ENABLE = 16,
+	SXE2_FLAG_FLTR_SYNC = 17,
+
+	SXE2_FLAG_EVENT_IRQ_DISABLED = 18,
+	SXE2_FLAG_SUSPEND = 19,
+	SXE2_FLAG_FNAV_ENABLE = 20,
+
+	SXE2_FLAGS_NBITS
+};
+
+struct sxe2_link_context {
+	rte_spinlock_t link_lock;
+	bool link_up;
+	u32 speed;
+};
+
+struct sxe2_devargs {
+	u8 flow_dup_pattern_mode;
+	u8 func_flow_direct_en;
+	u8 fnav_stat_type;
+	u8 high_performance_mode;
+	u8 sched_layer_mode;
+	u8 sw_stats_en;
+	u8 rx_low_latency;
+};
+
+#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff)
+#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff)
+
+enum sxe2_pci_map_resource {
+	SXE2_PCI_MAP_RES_INVALID = 0,
+	SXE2_PCI_MAP_RES_DOORBELL_TX,
+	SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+	SXE2_PCI_MAP_RES_IRQ_DYN,
+	SXE2_PCI_MAP_RES_IRQ_ITR,
+	SXE2_PCI_MAP_RES_IRQ_MSIX,
+	SXE2_PCI_MAP_RES_PTP,
+	SXE2_PCI_MAP_RES_MAX_COUNT,
+};
+
+enum sxe2_udp_tunnel_protocol {
+	SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0,
+	
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev;
+#endif
+	u8 vlan_flag;
+	u8 use_ctx:1,
+	res:7;
+};
+
+struct sxe2_rx_queue;
+
+struct sxe2_rxq_ops {
+	void (*queue_reset)(struct sxe2_rx_queue *rxq);
+	void (*mbufs_release)(struct sxe2_rx_queue *rxq);
+};
+
+struct sxe2_rxq_stats {
+	u64 rx_pkts_num;
+	u64 rx_rss_pkt_num;
+	u64 rx_fnav_pkt_num;
+	u64 rx_ptp_pkt_num;
+	u32 rx_vec_align_drop;
+
+	u32 rxdid_1588_err;
+	u32 ip_csum_err;
+	u32 l4_csum_err;
+	u32 outer_ip_csum_err;
+	u32 outer_l4_csum_err;
+	u32 macsec_err;
+	u32 ipsec_err;
+
+	u64 ptype_pkts[SXE2_MAX_PTYPE_NUM];
+};
+
+struct sxe2_rxq_sw_stats {
+	RTE_ATOMIC(uint64_t) pkts;
+	RTE_ATOMIC(uint64_t) bytes;
+	RTE_ATOMIC(uint64_t) drop_pkts;
+	RTE_ATOMIC(uint64_t) drop_bytes;
+	RTE_ATOMIC(uint64_t) unicast_pkts;
+	RTE_ATOMIC(uint64_t) multicast_pkts;
+	RTE_ATOMIC(uint64_t) broadcast_pkts;
+};
+
+struct sxe2_rx_queue {
+	volatile union sxe2_rx_desc *desc_ring;
+	volatile u32 *rdt_reg_addr;
+	struct rte_mempool *mb_pool;
+	struct rte_mbuf **buffer_ring;
+	struct sxe2_vsi *vsi;
+
+	u64 offloads;
+	u16 ring_depth;
+	u16 rx_free_thresh;
+	u16 processing_idx;
+	u16 hold_num;
+	u16 next_ret_pkt;
+	u16 batch_alloc_trigger;
+	u16 completed_pkts_num;
+	u64 update_time;
+	u32 desc_ts;
+	u64 ts_high;
+	u32 ts_low;
+	u32 ts_need_update;
+	u8 crc_len;
+	bool fnav_enable;
+
+	struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM];
+
+	struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2];
+	struct rte_mbuf *pkt_first_seg;
+	struct rte_mbuf *pkt_last_seg;
+	u64 mbuf_init_value;
+	u16 realloc_num;
+	u16 realloc_start;
+	struct rte_mbuf fake_mbuf;
+
+	const struct rte_memzone *mz;
+	struct sxe2_rxq_ops ops;
+	rte_iova_t base_addr;
+	u16 reg_idx;
+	u32 low_desc_waterline : 16;
+	u32 ldw_event_pending : 1;
+#ifdef SXE2_DPDK_DEBUG
+	struct sxe2_rxq_stats rx_stats;
+	struct sxe2_rxq_stats rx_stats_cur;
+	struct sxe2_rxq_stats rx_stats_prev;
+#endif
+	struct sxe2_rxq_sw_stats sw_stats;
+	u16 port_id;
+	u16 queue_id;
+	u16 idx_in_func;
+	u16 rx_buf_len;
+	u16 
rx_hdr_len;
+	u16 max_pkt_len;
+	bool rx_deferred_start;
+	u8 drop_en;
+};
+
+#ifdef SXE2_DPDK_DEBUG
+#define SXE2_RX_STATS_CNT(rxq, name, num) \
+	((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num))
+
+#define SXE2_TX_STATS_CNT(txq, name, num) \
+	((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num))
+#else
+#define SXE2_RX_STATS_CNT(rxq, name, num)
+#define SXE2_TX_STATS_CNT(txq, name, num)
+#endif
+
+#ifdef SXE2_DPDK_DEBUG_RXTX_LOG
+#define PMD_LOG_RX_DEBUG(fmt, ...) PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__)
+#else
+#define PMD_LOG_RX_DEBUG(fmt, ...)
+#define PMD_LOG_RX_INFO(fmt, ...)
+#define PMD_LOG_TX_DEBUG(fmt, ...)
+#define PMD_LOG_TX_INFO(fmt, ...)
+#endif
+
+struct sxe2_adapter;
+
+void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+		struct sxe2_drv_queue_caps *q_caps);
+
+s32 sxe2_queues_init(struct rte_eth_dev *dev);
+
+#endif
diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h
new file mode 100644
index 0000000000..7284cea4b6
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_common.h
@@ -0,0 +1,541 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT)
+
+#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0x3FFFUL
+
+#define SXE2_TX_DESC_RING_ALIGN \
+	(SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc))
+
+#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF
+
+#define SXE2_TX_FILL_PER_LOOP 4
+#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1)
+#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64)
+
+#define SXE2_RX_MAX_BURST 32
+#define SXE2_RING_SIZE_MIN 1024
+#define SXE2_RX_MAX_NSEG 2
+
+#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST
+#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST
+
+#define SXE2_RXQ_CTX_DBUFF_SHIFT 7
+
+#define SXE2_RX_NUM_PER_LOOP 8
+
+#define SXE2_RX_FLEX_DESC_PTYPE_S (16)
+#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL)
+
+#define SXE2_RX_HBUF_LEN_UNIT 6
+#define SXE2_RX_LDW_LEN_UNIT 6
+#define SXE2_RX_DBUF_LEN_UNIT 7
+#define SXE2_RX_DBUF_LEN_MASK (~0x7F)
+
+#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200
+
+#define SXE2_RX_VECTOR_OFFLOAD ( \
+	RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+	RTE_ETH_RX_OFFLOAD_VLAN | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH | \
+	RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+
+#define SXE2_DEFAULT_RX_FREE_THRESH 32
+#define SXE2_DEFAULT_RX_PTHRESH 8
+#define SXE2_DEFAULT_RX_HTHRESH 8
+#define SXE2_DEFAULT_RX_WTHRESH 0
+
+#define SXE2_DEFAULT_TX_FREE_THRESH 32
+#define SXE2_DEFAULT_TX_PTHRESH 32
+#define SXE2_DEFAULT_TX_HTHRESH 0
+#define SXE2_DEFAULT_TX_WTHRESH 0
+#define SXE2_DEFAULT_TX_RSBIT_THRESH 32
+
+#define SXE2_RX_SEG_NUM 2
+
+#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC
+#define sxe2_rx_desc sxe2_rx_16b_desc
+#else
+#define sxe2_rx_desc sxe2_rx_32b_desc
+#endif
+
+union sxe2_rx_16b_desc {
+	struct {
+		__le64 pkt_addr;
+		__le64 hdr_addr;
+	} read;
+	struct {
+		u8 rxdid_src;
+		u8 mirror;
+		__le16 l2tag1;
+		__le32 filter_status;
+
+		__le64 status_err_ptype_len;
+	} wb;
+};
+
+union sxe2_rx_32b_desc {
+	struct {
+		__le64 pkt_addr;
+		__le64 hdr_addr;
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		u8 rxdid_src;
+		u8 mirror;
+		__le16 l2tag1;
+		
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v5 6/9] drivers: support PCI BAR mapping 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (4 preceding siblings ...) 2026-05-01 3:33 ` [PATCH v5 5/9] drivers: add base driver probe skeleton liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 3:33 ` [PATCH v5 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (2 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + 
cmd_fd, bar_idx, len, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset), offset); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR(ITR0 is used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = 
map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + 
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v5 7/9] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (5 preceding siblings ...) 2026-05-01 3:33 ` [PATCH v5 6/9] drivers: support PCI BAR mapping liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 3:33 ` [PATCH v5 8/9] net/sxe2: support queue setup and control liujie5 2026-05-01 3:33 ` [PATCH v5 9/9] net/sxe2: add data path for Rx and Tx liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by the userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma map, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c 
b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "iommu does not support pa mode"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "no iommu, va mode not supported, please use pa mode."); + ret = SXE2_ERR_IO; + goto l_end; + } + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if 
(cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v5 8/9] net/sxe2: support queue setup and control 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (6 preceding siblings ...) 2026-05-01 3:33 ` [PATCH v5 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-01 3:33 ` [PATCH v5 9/9] net/sxe2: add data path for Rx and Tx liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 160a0de8ed..803e47c1aa 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -17,6 +17,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 
fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if 
(bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { #define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const 
struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold 
sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth = ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + 
rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configure with Keep crc.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, 
socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc *desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = 
rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + 
(void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + 
u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2 tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v5 9/9] net/sxe2: add data path for Rx and Tx 2026-05-01 3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5 ` (7 preceding siblings ...) 2026-05-01 3:33 ` [PATCH v5 8/9] net/sxe2: support queue setup and control liujie5 @ 2026-05-01 3:33 ` liujie5 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 8 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-01 3:33 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 13 +- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 11 files changed, 1082 insertions(+), 133 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 537d4e9f6a..d2ed1460a3 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -28,7 +28,7 @@ static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list static 
TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); -static pthread_mutex_t sxe2_common_devices_list_lock; +static rte_spinlock_t sxe2_common_devices_list_lock; static struct rte_pci_id *sxe2_common_pci_id_table; @@ -223,9 +223,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( cdev->config.kernel_reset = false; rte_ticketlock_init(&cdev->config.lock); - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); l_end: return cdev; @@ -233,10 +233,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( static void sxe2_common_device_free(struct sxe2_common_device *cdev) { - - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); rte_free(cdev); } @@ -662,7 +661,7 @@ sxe2_common_init(void) if (sxe2_commoin_inited) goto l_end; - pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + rte_spinlock_init(&sxe2_common_devices_list_lock); #ifdef SXE2_DPDK_DEBUG sxe2_common_log_stream_init(); #endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) 
\ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) 
\ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ 
-178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto 
l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 803e47c1aa..728a88b6a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -19,6 +19,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < 
bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > 
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
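Both scattered receive functions above are built around the same ring-walk skeleton: poll the descriptor-done (DD) bit, chain buffer lengths until the end-of-packet (EOP) bit is seen, and wrap the ring index with a compare rather than a modulo. A minimal, self-contained sketch of just that skeleton — plain C with illustrative names; the `demo_*` types and `DEMO_DD`/`DEMO_EOP` bit values are placeholders, not the sxe2 descriptor layout, and the cross-call `pkt_first_seg`/`pkt_last_seg` carry-over of the real driver is omitted:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative status bits; the real driver extracts these from qword1 of
 * the write-back descriptor. These values are placeholders. */
#define DEMO_DD  0x1u  /* descriptor done: hardware wrote this entry back */
#define DEMO_EOP 0x2u  /* end of packet: last buffer of the frame */

struct demo_desc {
	uint32_t status;
	uint16_t pkt_len;  /* bytes in this buffer */
};

/*
 * Walk the ring the way sxe2_rx_pkts_scattered() does: stop at the first
 * descriptor whose DD bit is clear, accumulate segment lengths until EOP,
 * wrap the index with a compare instead of a modulo. Writes each complete
 * frame's total length to frame_len[] and returns the number of frames.
 */
static size_t demo_rx_scattered(const struct demo_desc *ring, size_t depth,
				size_t *idx, uint16_t *frame_len,
				size_t max_frames)
{
	size_t done = 0;
	uint32_t pkt_len = 0;   /* running length of the frame being assembled */

	while (done < max_frames) {
		const struct demo_desc *d = &ring[*idx];

		if (!(d->status & DEMO_DD))
			break;          /* hardware has not written this entry yet */

		pkt_len += d->pkt_len;  /* chain this segment onto the frame */

		if (++*idx == depth)    /* ring wrap, as in the driver */
			*idx = 0;

		if (!(d->status & DEMO_EOP))
			continue;       /* more segments of this frame follow */

		frame_len[done++] = (uint16_t)pkt_len;
		pkt_len = 0;
	}

	return done;
}
```

The real functions additionally subtract `RTE_ETHER_CRC_LEN` from the accumulated length when CRC stripping is enabled, freeing the final segment outright when it holds only CRC bytes, and they preserve a partially assembled frame across calls instead of dropping it at the DD-clear boundary.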
* [PATCH v6 00/10] Add sxe2 driver 2026-05-01 3:33 ` [PATCH v5 9/9] net/sxe2: add data path for Rx and Tx liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 01/10] mailmap: add Jie Liu liujie5 ` (9 more replies) 0 siblings, 10 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> V6: - Addressed AI comments Jie Liu (10): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control drivers: add data path for Rx and Tx net/sxe2: add vectorized Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 4 + drivers/common/sxe2/meson.build | 15 + drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 35 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 971 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 316 +++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ 
drivers/net/sxe2/sxe2_queue.c | 39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 368 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 966 ++++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 17 + drivers/net/sxe2/sxe2_txrx_vec.c | 188 ++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 549 ++++++++++++ drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 44 files changed, 10042 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c 
create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
* [PATCH v6 01/10] mailmap: add Jie Liu 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 02/10] doc: add sxe2 guide and release notes liujie5 ` (8 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 895412e568..d2c4485636 100644 --- a/.mailmap +++ b/.mailmap @@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v6 02/10] doc: add sxe2 guide and release notes 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 2026-05-06 2:12 ` [PATCH v6 01/10] mailmap: add Jie Liu liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 03/10] drivers: add sxe2 basic structures liujie5 ` (7 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 4 ++++ 4 files changed, 39 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates only be supported when non-vector path +; is selected. 
+; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps Network Adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported + +Implementation details +---------------------- + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resources allocations are handled by the kernel +combined with hardware specifications that allow it to handle virtual memory +addresses directly ensure that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index f012d47a4b..fa0f0f5cca 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -64,6 +64,10 @@ New Features * ``--auto-probing`` enables the initial bus probing, which is the current default behavior. +* **Added Linkdata sxe2 ethernet driver.** + + Added network driver for the Linkdata Network Adapters. 
+ Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v6 03/10] drivers: add sxe2 basic structures 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 2026-05-06 2:12 ` [PATCH v6 01/10] mailmap: add Jie Liu liujie5 2026-05-06 2:12 ` [PATCH v6 02/10] doc: add sxe2 guide and release notes liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 04/10] common/sxe2: add base driver skeleton liujie5 ` (6 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 13 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1959 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build 
b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..7d448629d5 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2023 Corigine, Inc. + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} 
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void +sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? 
strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__ RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ERR(logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) 
\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) 
\ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) 
RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = 
-ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, + + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMIEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 
2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define 
SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 
+#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) +#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF 
<< SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + +#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define 
SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M 
SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define 
SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT 
(SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M 
SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 +#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define 
SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H 
(PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + CGMAC_PORT_OFFSET * (_port) + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET 
+ ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + (CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + 
port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_INTERNAL_VER_H__ +#define __SXE2_INTERNAL_VER_H__ + +#define SXE2_VER_MAJOR_OFFSET (16) +#define SXE2_MK_VER(major, minor) \ + (((major) << SXE2_VER_MAJOR_OFFSET) | (minor)) +#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff) +#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff) + +#define SXE2_ITR_VER_MAJOR_V100 1 +#define SXE2_ITR_VER_MAJOR_V200 2 + +#define SXE2_ITR_VER_MAJOR 1 +#define SXE2_ITR_VER_MINOR 1 +#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR) + +#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100) +#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200) + +#define SXE2LIB_ITR_VER_MAJOR 1 +#define SXE2LIB_ITR_VER_MINOR 1 +#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR) + +#define SXE2_DRV_CLI_VER_MAJOR 1 +#define SXE2_DRV_CLI_VER_MINOR 1 +#define SXE2_DRV_CLI_VER \ + SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR) + +#endif diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h new file mode 100644 index 0000000000..fd6823fe98 --- /dev/null +++ b/drivers/common/sxe2/sxe2_osal.h @@ -0,0 +1,584 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_OSAL_H__ +#define __SXE2_OSAL_H__ +#include <string.h> +#include <stdint.h> +#include <stdarg.h> +#include <inttypes.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_ether.h> +#include <rte_version.h> + +#include "sxe2_type.h" + +#define BIT(nr) (1UL << (nr)) +#ifndef __BITS_PER_LONG +#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG) +#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG)) + +#ifndef BIT_ULL +#define BIT_ULL(a) (1ULL << (a)) +#endif + +#define MIN(a, b) ((a) < (b) ?
(a) : (b)) + +#define BITS_PER_BYTE 8 + +#define IS_UNICAST_ETHER_ADDR(addr) \ + ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0)) + +#define STRUCT_SIZE(ptr, field, num) \ + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) + +#ifndef TAILQ_FOREACH_SAFE +#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \ + for ((var) = TAILQ_FIRST((head)); \ + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \ + (var) = (tvar)) +#endif + +#define SXE2_QUEUE_WAIT_RETRY_CNT (50) + +#define __iomem + +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define dma_addr_t rte_iova_t + +#define resource_size_t u64 + +#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f) +#define ARRAY_SIZE(arr) RTE_DIM(arr) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +#define CPU_TO_BE16(o) rte_cpu_to_be_16(o) +#define CPU_TO_BE32(o) rte_cpu_to_be_32(o) +#define CPU_TO_BE64(o) rte_cpu_to_be_64(o) +#define BE16_TO_CPU(o) rte_be_to_cpu_16(o) + +#define NTOHS(a) rte_be_to_cpu_16(a) +#define NTOHL(a) rte_be_to_cpu_32(a) +#define HTONS(a) rte_cpu_to_be_16(a) +#define HTONL(a) rte_cpu_to_be_32(a) + +#define udelay(x) rte_delay_us(x) + +#define mdelay(x) rte_delay_us(1000 * (x)) + +#define msleep(x) rte_delay_us(1000 * (x)) + +#ifndef DIV_ROUND_UP +#define DIV_ROUND_UP(n, d) \ + (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) +#endif + +#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) + +#define __bf_shf(x) ((uint32_t)rte_bsf64(x)) + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG 32 +#endif + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask))) + +#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d) 
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef char s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
'common/zsda', # depends on bus. + 'common/sxe2', # depends on bus. 'mempool', # depends on common and bus. 'dma', # depends on common and bus. 'net', # depends on common, bus, mempool -- 2.47.3
* [PATCH v6 04/10] common/sxe2: add base driver skeleton 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 ` (2 preceding siblings ...) 2026-05-06 2:12 ` [PATCH v6 03/10] drivers: add sxe2 basic structures liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 05/10] drivers: add base driver probe skeleton liujie5 ` (5 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between the user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 2 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ 6 files changed, 1071 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build index 7d448629d5..3626fb1119 100644 --- a/drivers/common/sxe2/meson.build +++ b/drivers/common/sxe2/meson.build @@ -9,5 +9,7 @@ cflags += [ deps += ['bus_pci', 'net', 'eal', 'ethdev'] sources = files( + 'sxe2_common.c', 'sxe2_common_log.c', + 'sxe2_ioctl_chnl.c', ) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c new file mode 
100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + + 
TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void *args) +{ + u32 *class_type = (u32 *)args; + s32 
ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next);
(void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + } + + cdev->cdrv = cdrv; +l_end: + return ret; +} + 
+static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto l_free_args; + } + + ret = 
sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool exists = false; + + for (i = 
0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + sxe2_common_pci_driver.drv_flags |= 
RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_common_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_common_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); +#ifdef SXE2_DPDK_DEBUG + sxe2_common_log_stream_init(); +#endif + sxe2_common_pci_init(); + sxe2_common_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..f62e00e053 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = SXE2_ERR_IO; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"] " + "opcode[0x%x] req_len[%u] resp_len[%u]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct
rte_pci_device *pci_dev) +{ + s32 ret = SXE2_SUCCESS; + s32 fd = 0; + s8 drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd > 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + }
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
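[Editor's note] The zero-terminated PCI ID table merge in sxe2_common_pci_id_table_update() above can be modeled in isolation. The following is a simplified standalone sketch, with a plain struct standing in for struct rte_pci_id and with hypothetical names (pci_id_table_merge, pci_id_exists) that are not part of the driver:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for struct rte_pci_id; vendor_id == 0 terminates a table. */
struct pci_id {
	uint16_t vendor_id;
	uint16_t device_id;
};

static uint32_t pci_id_table_size(const struct pci_id *table)
{
	uint32_t n = 0;

	for (; table->vendor_id != 0; table++)
		n++;
	return n;
}

static bool pci_id_exists(const struct pci_id *id,
			  const struct pci_id *table, uint32_t size)
{
	uint32_t i;

	for (i = 0; i < size; i++) {
		if (id->vendor_id == table[i].vendor_id &&
		    id->device_id == table[i].device_id)
			return true;
	}
	return false;
}

/* Merge two zero-terminated tables, skipping duplicate IDs.
 * Returns a newly allocated, zero-terminated table; caller frees it.
 */
static struct pci_id *pci_id_table_merge(const struct pci_id *old_table,
					 const struct pci_id *add)
{
	uint32_t num = pci_id_table_size(old_table) + pci_id_table_size(add) + 1;
	struct pci_id *merged = calloc(num, sizeof(*merged));
	uint32_t i = 0;

	if (merged == NULL)
		return NULL;

	/* Copy the existing table, then fold in only the new, unseen IDs. */
	for (; old_table->vendor_id != 0; old_table++, i++)
		merged[i] = *old_table;
	for (; add->vendor_id != 0; add++) {
		if (!pci_id_exists(add, merged, i))
			merged[i++] = *add;
	}
	merged[i].vendor_id = 0;
	return merged;
}
```

This mirrors the effect the common driver appears to aim for: each sxe2_class_driver_register() call folds that class driver's id_table into the single table handed to rte_pci_register(), so an ID shared by several class drivers appears only once in the registered table.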
* [PATCH v6 05/10] drivers: add base driver probe skeleton 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 ` (3 preceding siblings ...) 2026-05-06 2:12 ` [PATCH v6 04/10] common/sxe2: add base driver skeleton liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 06/10] drivers: support PCI BAR mapping liujie5 ` (4 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 22 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3025 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 
drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64, + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 'softnic', +
'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..160a0de8ed --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Build the base subdirectory and collect the target objects + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, 
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return 
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "enable queues failed"); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + goto l_end; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + 
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = NULL; + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret = SXE2_SUCCESS; + + if (!cdev) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI(cdev->dev); + + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto 
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + 
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *txq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t)pkts; + RTE_ATOMIC(uint64_t)bytes; + RTE_ATOMIC(uint64_t)drop_pkts; + RTE_ATOMIC(uint64_t)drop_bytes; + RTE_ATOMIC(uint64_t)unicast_pkts; + RTE_ATOMIC(uint64_t)multicast_pkts; + RTE_ATOMIC(uint64_t)broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...)PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v6 06/10] drivers: support PCI BAR mapping 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 ` (4 preceding siblings ...) 2026-05-06 2:12 ` [PATCH v6 05/10] drivers: add base driver probe skeleton liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (3 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + bar_idx, cmd_fd, len, 
offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = 
map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, org_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + 
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3
* [PATCH v6 07/10] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 ` (5 preceding siblings ...) 2026-05-06 2:12 ` [PATCH v6 06/10] drivers: support PCI BAR mapping liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 08/10] net/sxe2: support queue setup and control liujie5 ` (2 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma map, ret=%d", ret); + goto l_end; + } + +l_end: + 
return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset; restart the app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "IOMMU enabled, PA mode not supported"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "no IOMMU, VA mode not supported; please use PA mode."); + ret = SXE2_ERR_IO; + goto l_end; + } + } + + 
cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset; restart the app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ 
b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3
* [PATCH v6 08/10] net/sxe2: support queue setup and control 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 ` (6 preceding siblings ...) 2026-05-06 2:12 ` [PATCH v6 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 09/10] drivers: add data path for Rx and Tx liujie5 2026-05-06 2:12 ` [PATCH v6 10/10] net/sxe2: add vectorized " liujie5 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 160a0de8ed..803e47c1aa 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -17,6 +17,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 
sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { 
#define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + 
rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if 
(dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth 
= ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + 
dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configure with Keep crc.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc 
*desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Failed to allocate mbuf for Rx queue"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + 
PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + 
rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ 
b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2 tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
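[Editor's illustration] The Tx queue reset in the patch above (sxe2_tx_queue_reset in sxe2_tx.c) threads the software buffer ring into a single circular list via the last_id/next_id fields before any descriptors are handed to hardware, so the cleanup path can follow next_id instead of scanning the whole ring. The standalone sketch below mirrors only that chain-building loop; the ring depth, struct, and function names are illustrative stand-ins, not the driver's own, and the invariant checked is that walking next_id from slot 0 visits every entry exactly once before wrapping.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative ring depth; the driver sizes this from nb_desc. */
#define RING_DEPTH 8

/* Hypothetical mirror of the two index fields sxe2_tx_buffer keeps. */
struct tx_buffer {
	uint16_t last_id;
	uint16_t next_id;
};

static struct tx_buffer ring[RING_DEPTH];

/* Same chain-building step as in sxe2_tx_queue_reset(): slot i records
 * itself as last_id, and the previous slot's next_id points at i,
 * producing one circular list over the whole ring. */
static void ring_chain_init(void)
{
	uint16_t prev = RING_DEPTH - 1;

	for (uint16_t i = 0; i < RING_DEPTH; i++) {
		ring[i].last_id = i;
		ring[prev].next_id = i;
		prev = i;
	}
}

/* Walk next_id from slot 0 until it wraps back; a correctly built
 * chain returns after exactly RING_DEPTH hops. The hop cap only
 * guards against an accidental non-terminating walk. */
static uint16_t cycle_length(void)
{
	uint16_t idx = 0, hops = 0;

	do {
		idx = ring[idx].next_id;
		hops++;
	} while (idx != 0 && hops < 2 * RING_DEPTH);

	return hops;
}
```

A shorter or longer chain here would mean a slot was skipped or aliased during reset, which would surface later as leaked or double-freed mbufs in the Tx cleanup path.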
* [PATCH v6 09/10] drivers: add data path for Rx and Tx 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 ` (7 preceding siblings ...) 2026-05-06 2:12 ` [PATCH v6 08/10] net/sxe2: support queue setup and control liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 2:12 ` [PATCH v6 10/10] net/sxe2: add vectorized " liujie5 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 13 +- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 11 files changed, 1082 insertions(+), 133 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 537d4e9f6a..d2ed1460a3 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -28,7 +28,7 @@ static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list static TAILQ_HEAD(sxe2_common_devices, 
sxe2_common_device) sxe2_common_devices_list = TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); -static pthread_mutex_t sxe2_common_devices_list_lock; +static rte_spinlock_t sxe2_common_devices_list_lock; static struct rte_pci_id *sxe2_common_pci_id_table; @@ -223,9 +223,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( cdev->config.kernel_reset = false; rte_ticketlock_init(&cdev->config.lock); - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); l_end: return cdev; @@ -233,10 +233,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( static void sxe2_common_device_free(struct sxe2_common_device *cdev) { - - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); rte_free(cdev); } @@ -662,7 +661,7 @@ sxe2_common_init(void) if (sxe2_commoin_inited) goto l_end; - pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + rte_spinlock_init(&sxe2_common_devices_list_lock); #ifdef SXE2_DPDK_DEBUG sxe2_common_log_stream_init(); #endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) 
\ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) 
\ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ 
-178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto 
l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 803e47c1aa..728a88b6a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -19,6 +19,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < 
bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > 
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v6 10/10] net/sxe2: add vectorized Rx and Tx 2026-05-06 2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5 ` (8 preceding siblings ...) 2026-05-06 2:12 ` [PATCH v6 09/10] drivers: add data path for Rx and Tx liujie5 @ 2026-05-06 2:12 ` liujie5 2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5 9 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-06 2:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch implements the vectorized data path for the sxe2 PMD. It utilizes SIMD instructions (e.g., SSE) to process multiple packets simultaneously, significantly improving throughput for small packet processing. The implementation includes: * Vectorized Rx burst function for bulk descriptor processing. * Vectorized Tx burst function with optimized resource cleanup. * Capability flags update to reflect vectorized path support. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 9 + drivers/net/sxe2/sxe2_ethdev.c | 8 +- drivers/net/sxe2/sxe2_txrx.c | 227 +++++++--- drivers/net/sxe2/sxe2_txrx.h | 12 +- drivers/net/sxe2/sxe2_txrx_poll.c | 184 ++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 3 +- drivers/net/sxe2/sxe2_txrx_vec.c | 188 ++++++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 549 ++++++++++++++++++++++++ 10 files changed, 1420 insertions(+), 67 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 728a88b6a1..b9618f2964 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -12,6 +12,14 @@ cflags += ['-g'] deps += ['common_sxe2', 'hash','cryptodev','security'] +if 
arch_subdir == 'x86' + sources += files('sxe2_txrx_vec_sse.c') + + if is_windows and cc.get_id() != 'clang' + cflags += ['-fno-asynchronous-unwind-tables'] + endif +endif + sources += files( 'sxe2_ethdev.c', 'sxe2_cmd_chnl.c', @@ -21,6 +29,7 @@ sources += files( 'sxe2_rx.c', 'sxe2_txrx_poll.c', 'sxe2_txrx.c', + 'sxe2_txrx_vec.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 68d7e36cf1..7eaa1722d0 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -58,17 +58,11 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { }; static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { - /* SXE2_PCI_MAP_RES_INVALID */ {0, 0, 0}, - /* SXE2_PCI_MAP_RES_DOORBELL_TX */ { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ { SXE2_RXQ_TAIL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_DYN */ { SXE2_VF_DYN_CTL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ { SXE2_VF_INT_ITR(0, 0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_MSIX */ { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, }; @@ -312,6 +306,8 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .rxq_info_get = sxe2_rx_queue_info_get, .txq_info_get = sxe2_tx_queue_info_get, + .rx_burst_mode_get = sxe2_rx_burst_mode_get, + .tx_burst_mode_get = sxe2_tx_burst_mode_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c index 3e88ab5241..8793a61d13 100644 --- a/drivers/net/sxe2/sxe2_txrx.c +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -9,12 +9,11 @@ #include <rte_memzone.h> #include <ethdev_driver.h> #include <unistd.h> - #include "sxe2_txrx.h" #include "sxe2_txrx_common.h" +#include "sxe2_txrx_vec.h" #include "sxe2_txrx_poll.h" #include "sxe2_ethdev.h" - #include "sxe2_common_log.h" #include "sxe2_errno.h" #include "sxe2_osal.h" @@ -22,18 +21,38 @@ #if defined(RTE_ARCH_ARM64) #include
<rte_cpuflags.h> #endif - +s32 __rte_cold +sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) || + txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + } + *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH; +l_end: + return ret; +} static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) { struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; s32 ret; u16 desc_idx; - if (unlikely(offset >= txq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - desc_idx = txq->next_use + offset; desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); if (desc_idx >= txq->ring_depth) { @@ -41,19 +60,16 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) if (desc_idx >= txq->ring_depth) desc_idx -= txq->ring_depth; } - if (desc_idx == 0) desc_idx = txq->rs_thresh - 1; else desc_idx -= 1; - if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == (txq->desc_ring[desc_idx].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) ret = RTE_ETH_TX_DESC_DONE; else ret = RTE_ETH_TX_DESC_FULL; - l_end: return ret; } @@ -61,13 +77,11 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) { struct rte_mbuf *m_seg = mbuf; - while (m_seg != NULL) { if (m_seg->data_len == 0) return SXE2_ERR_INVAL; m_seg = m_seg->next; } - return SXE2_SUCCESS; } @@ -79,7 +93,6 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, u64 ol_flags = 0; s32 ret = SXE2_SUCCESS; s32 i = 0; - for (i = 0; i < nb_pkts; i++) { mbuf = tx_pkts[i]; if (!mbuf) @@ -98,12 +111,10 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, 
rte_errno = -SXE2_ERR_INVAL; goto l_end; } - if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { rte_errno = -SXE2_ERR_INVAL; goto l_end; } - #ifdef RTE_ETHDEV_DEBUG_TX ret = rte_validate_tx_offload(mbuf); if (ret != SXE2_SUCCESS) { @@ -116,14 +127,12 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -ret; goto l_end; } - ret = sxe2_tx_mbuf_empty_check(mbuf); if (ret != SXE2_SUCCESS) { rte_errno = -ret; goto l_end; } } - l_end: return i; } @@ -132,42 +141,119 @@ void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 tx_mode_flags = 0; - + s32 ret; + u32 vec_flags; + u32 batch_flags; + RTE_SET_USED(vec_flags); PMD_INIT_FUNC_TRACE(); - - dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; - dev->tx_pkt_burst = sxe2_tx_pkts; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_tx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) { +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) { +#ifdef CC_AVX512_SUPPORT + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX512); +#else + PMD_LOG_INFO(TX, "AVX512 is not supported in build env."); +#endif + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK)) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX2); + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK))) + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_SSE); +#endif + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + ret = sxe2_tx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK); + } + } + ret = 
sxe2_tx_simple_batch_support_check(dev, &batch_flags); + if (ret == SXE2_SUCCESS && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH) + tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH; + } + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + dev->tx_pkt_prepare = NULL; +#ifdef RTE_ARCH_X86 + if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse; + } else { + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple; + } +#endif + } else { + if (tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) { + dev->tx_pkt_prepare = NULL; + dev->tx_pkt_burst = sxe2_tx_pkts_simple; + } else { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + } + } adapter->q_ctxt.tx_mode_flags = tx_mode_flags; PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", tx_mode_flags, dev->data->port_id); } +static const struct { + eth_tx_burst_t tx_burst; + const char *info; +} sxe2_tx_burst_infos[] = { + { sxe2_tx_pkts, "Scalar" }, +#ifdef RTE_ARCH_X86 + { sxe2_tx_pkts_vec_sse, "Vector SSE" }, + { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" }, +#endif +}; + +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode) +{ + eth_tx_burst_t pkt_burst = dev->tx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i; + u32 size; + size = RTE_DIM(sxe2_tx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_tx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) { struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; s32 ret; - if (unlikely(offset >= rxq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - if (offset >= rxq->ring_depth - rxq->hold_num) { ret = RTE_ETH_RX_DESC_UNAVAIL; goto l_end; } - if (rxq->processing_idx + offset >= 
rxq->ring_depth) desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; else desc = &rxq->desc_ring[rxq->processing_idx + offset]; - if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) ret = RTE_ETH_RX_DESC_DONE; else ret = RTE_ETH_RX_DESC_AVAIL; - l_end: PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", offset, ret, rxq->queue_id, rxq->port_id); @@ -179,7 +265,6 @@ static s32 sxe2_rx_queue_count(void *rx_queue) struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; u16 done_num = 0; - desc = &rxq->desc_ring[rxq->processing_idx]; while ((done_num < rxq->ring_depth) && (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & @@ -190,59 +275,93 @@ static s32 sxe2_rx_queue_count(void *rx_queue) else desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; } - PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", done_num, rxq->queue_id, rxq->port_id); - return done_num; } -static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) -{ - struct sxe2_rx_queue *rxq; - bool en = false; - u16 i; - - for (i = 0; i < dev->data->nb_rx_queues; ++i) { - rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; - if (rxq == NULL) - continue; - - if (0 != (rxq->offloads & offload)) { - en = true; - goto l_end; - } - } - -l_end: - return en; -} - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) { - struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); +struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 rx_mode_flags = 0; - +#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64) + s32 ret; + u32 vec_flags; +#endif PMD_INIT_FUNC_TRACE(); - + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_rx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + 
((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_AVX2); + } + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_SSE); + } + if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) { + ret = sxe2_rx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK); + } + } + } +#ifdef RTE_ARCH_X86 + if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) { + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload; + goto l_end; + } +#endif if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; else dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - + goto l_end; +l_end: PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", rx_mode_flags, dev->data->port_id); adapter->q_ctxt.rx_mode_flags = rx_mode_flags; } +static const struct { + eth_rx_burst_t rx_burst; + const char *info; +} sxe2_rx_burst_infos[] = { + { sxe2_rx_pkts_scattered, "Scalar Scattered" }, + { sxe2_rx_pkts_scattered_split, "Scalar Scattered split" }, +#ifdef RTE_ARCH_X86 + { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" }, +#endif +}; + +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode) +{ + eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i, size; + size = RTE_DIM(sxe2_rx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_rx_burst_infos[i].rx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_rx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + void sxe2_set_common_function(struct rte_eth_dev *dev) { PMD_INIT_FUNC_TRACE(); - dev->rx_queue_count = 
sxe2_rx_queue_count; dev->rx_descriptor_status = sxe2_rx_desciptor_status; dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - dev->tx_descriptor_status = sxe2_tx_desciptor_status; dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; dev->tx_pkt_burst = sxe2_tx_pkts; diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h index cd9ebfa32f..7bb852789c 100644 --- a/drivers/net/sxe2/sxe2_txrx.h +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -6,16 +6,16 @@ #define SXE2_TXRX_H #include <ethdev_driver.h> #include "sxe2_queue.h" - void sxe2_set_common_function(struct rte_eth_dev *dev); - +s32 __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags); u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); - void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); - +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode); +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode); #endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c index 55bea8b74c..41f7288318 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.c +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -19,6 +19,66 @@ #include "sxe2_common_log.h" #include "sxe2_errno.h" +static __rte_always_inline s32 +sxe2_tx_bufs_free(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1]; + if 
(txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) { + mbuf = buffer[0].mbuf; + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = buffer[i].mbuf; + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + rte_mempool_put(buffer->mbuf->pool, buffer->mbuf); + buffer->mbuf = NULL; + } + } + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + buffer->mbuf = NULL; + } + } + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) { s32 ret = SXE2_SUCCESS; @@ -330,6 +390,130 @@ u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) return tx_num; } +static __rte_always_inline void +sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 desc_offset; + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, (*tx_pkts)->data_len, 0); +} +static __rte_always_inline void +sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 i; + u32 desc_offset; + for (i = 0; i < 
SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) { + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, + (*tx_pkts)->data_len, + 0); + } +} + +static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use]; + volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use]; + u32 i, j; + u32 mainpart; + u32 leftover; + mainpart = nb_pkts & ((u32)~SXE2_TX_FILL_PER_LOOP_MASK); + leftover = nb_pkts & ((u32)SXE2_TX_FILL_PER_LOOP_MASK); + for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) { + for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j) + (buffer + i + j)->mbuf = *(tx_pkts + i + j); + sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i); + } + if (unlikely(leftover > 0)) { + for (i = 0; i < leftover; ++i) { + (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i); + sxe2_tx_data_desc_fill(desc + mainpart + i, + tx_pkts + mainpart + i); + } + } +} + +static inline u16 sxe2_tx_pkts_batch(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + u16 res_num = 0; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx batch: may not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + txq->desc_free_num -= nb_pkts; + if ((txq->next_use + nb_pkts) > txq->ring_depth) { + res_num = txq->ring_depth - txq->next_use; + sxe2_tx_ring_fill(txq, tx_pkts, res_num); + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + 
rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs = txq->rs_thresh - 1; + txq->next_use = 0; + } + sxe2_tx_ring_fill(txq, tx_pkts + res_num, nb_pkts - res_num); + txq->next_use = txq->next_use + (nb_pkts - res_num); + if (txq->next_use > txq->next_rs) { + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + if (txq->next_rs >= txq->ring_depth) + txq->next_rs = txq->rs_thresh - 1; + } + if (txq->next_use >= txq->ring_depth) + txq->next_use = 0; + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, txq->next_use, nb_pkts); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use); + SXE2_TX_STATS_CNT(tx_queue, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 tx_done_num; + u16 tx_once_num; + u16 tx_need_num; + if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) { + tx_done_num = sxe2_tx_pkts_batch(tx_queue, + tx_pkts, nb_pkts); + goto l_end; + } + tx_done_num = 0; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM); + tx_once_num = sxe2_tx_pkts_batch(tx_queue, + &tx_pkts[tx_done_num], tx_need_num); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } +l_end: + return tx_done_num; +} + static inline void sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) { diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h index 4924b0f41f..67da08e58e 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.h +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -8,7 +8,8 @@ #include "sxe2_queue.h" u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 
nb_pkts); u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); diff --git a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c new file mode 100644 index 0000000000..1e44d510cd --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.c @@ -0,0 +1,188 @@ +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_rx_queue *rxq; + s32 ret = SXE2_SUCCESS; + u16 i; + *vec_flags = SXE2_RX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (!rte_is_power_of_2(rxq->ring_depth)) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC && + (rxq->ring_depth % rxq->rx_free_thresh) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + if ((rxq->offloads & offload) != 0) { + en = true; + goto l_end; + } + } +l_end: + return en; +} + +static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq) +{ + const u16 mask = rxq->ring_depth - 1; + u16 i; + if (unlikely(!rxq->buffer_ring)) { + PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring is NULL."
+ "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id); + return; + } + if (rxq->realloc_num >= rxq->ring_depth) + return; + if (rxq->realloc_num == 0) { + for (i = 0; i < rxq->ring_depth; ++i) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } else { + for (i = rxq->processing_idx; + i != rxq->realloc_start; + i = (i + 1) & mask) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + rxq->realloc_num = rxq->ring_depth; + memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0])); +} + +static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq) +{ + uintptr_t data; + struct rte_mbuf mbuf_def; + mbuf_def.buf_addr = 0; + mbuf_def.nb_segs = 1; + mbuf_def.data_off = RTE_PKTMBUF_HEADROOM; + mbuf_def.port = rxq->port_id; + rte_mbuf_refcnt_set(&mbuf_def, 1); + rte_compiler_barrier(); + data = (uintptr_t)&mbuf_def.rearm_data; + rxq->mbuf_init_value = *(u64 *)data; +} + +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_rx_queue *rxq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i); + continue; + } + rxq->ops.mbufs_release = sxe2_rx_queue_mbufs_release_vec; + sxe2_rx_queue_vec_init(rxq); + } + return ret; +} + +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u32 i; + *vec_flags = SXE2_TX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC || + txq->rs_thresh > SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) { + ret = SXE2_ERR_NOTSUP; + 
goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + u16 i; + if (unlikely(txq == NULL || txq->buffer_ring == NULL)) { + PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params."); + goto l_end; + } + i = txq->next_dd - (txq->rs_thresh - 1); + buffer = txq->buffer_ring; + if (txq->next_use < i) { + for ( ; i < txq->ring_depth; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + i = 0; + } + for (; i < txq->next_use; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } +l_end: + return; +} + +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_tx_queue *txq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) { + PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i); + continue; + } + txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec; + } + return ret; +} diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h new file mode 100644 index 0000000000..cb6a3dd3b8 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_VEC_H_ +#define _SXE2_TXRX_VEC_H_ +#include <ethdev_driver.h> +#include "sxe2_queue.h" +#include "sxe2_type.h" +#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10) +#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \ + SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \ + SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \ + SXE2_RX_MODE_VEC_NEON) +#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10) +#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \ + SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \ + SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \ + SXE2_TX_MODE_VEC_NEON) +#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \ + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \ + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_TSO | \ + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_SECURITY | \ + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) +#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_CKSUM) +#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP | \ + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \ + RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_SECURITY | \ + 
RTE_ETH_RX_OFFLOAD_QINQ_STRIP) +#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH) +#ifdef RTE_ARCH_X86 +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts); +#endif +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload); +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev); +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h new file mode 100644 index 0000000000..c0405c9a59 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h @@ -0,0 +1,235 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_TXRX_VEC_COMMON_H__ +#define __SXE2_TXRX_VEC_COMMON_H__ +#include <rte_atomic.h> +#ifdef PCLINT +#include "avx_stub.h" +#endif +#include "sxe2_rx.h" +#include "sxe2_queue.h" +#include "sxe2_tx.h" +#include "sxe2_vsi.h" +#include "sxe2_ethdev.h" +#define SXE2_RX_NUM_PER_LOOP_SSE 4 +#define SXE2_RX_NUM_PER_LOOP_AVX 8 +#define SXE2_RX_NUM_PER_LOOP_NEON 4 +#define SXE2_RX_REARM_THRESH_VEC 64 +#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32 +#define SXE2_TX_RS_THRESH_MIN_VEC 32 +#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64 + +static __rte_always_inline void +sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 i; + for (i = 0; i < nb_pkts; ++i) + buffer[i].mbuf = tx_pkts[i]; +} + +static __rte_always_inline s32 +sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)]; + mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf); + if (likely(mbuf)) { + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (likely(mbuf)) { + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + } + 
} + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + +static inline void +sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, u64 *desc_qw1) +{ + u64 offloads = mbuf->ol_flags; + u32 desc_cmd = 0; + u32 desc_offset = 0; + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + default: + break; + } + *desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + *desc_qw1 |= ((u64)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT; + } + *desc_qw1 |= ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT; +} +#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \ + (((_flags) & 0x30) >> 4) + +static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq, + struct rte_mbuf *mbuf, u8 umbcast_flag) +{ + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed); + switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } + } +} + +static inline u16 +sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq, + struct rte_mbuf **mbuf_bufs, u16 mbuf_num, + u8 *split_rxe_flags, u8 *umbcast_flags) +{ + struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *tmp_seg; + u16 done_num, buf_idx; + done_num = 0; + for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) { + if (last_seg) { + last_seg->next = mbuf_bufs[buf_idx]; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + first_seg->nb_segs++; + first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len; + last_seg = last_seg->next; + if (split_rxe_flags[buf_idx] == 0) { + first_seg->hash = last_seg->hash; + first_seg->vlan_tci = last_seg->vlan_tci; + first_seg->ol_flags = last_seg->ol_flags; + first_seg->pkt_len -= rxq->crc_len; + if (last_seg->data_len > rxq->crc_len) { + last_seg->data_len -= rxq->crc_len; + } else { + tmp_seg = first_seg; + first_seg->nb_segs--; + while (tmp_seg->next != last_seg) + tmp_seg = tmp_seg->next; + tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len); + tmp_seg->next = NULL; + rte_pktmbuf_free_seg(last_seg); + last_seg = NULL; + } + done_pkts[done_num++] = first_seg; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]); + first_seg = NULL; + last_seg = NULL; + } else if 
(split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + continue; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + last_seg = NULL; + continue; + } + } else { + if (split_rxe_flags[buf_idx] == 0) { + done_pkts[done_num++] = mbuf_bufs[buf_idx]; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx], + umbcast_flags[buf_idx]); + continue; + } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + first_seg = mbuf_bufs[buf_idx]; + last_seg = first_seg; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]); + continue; + } + } + } + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *))); + return done_num; +} +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c new file mode 100644 index 0000000000..1f5effd203 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c @@ -0,0 +1,549 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_bitops.h> +#include <rte_malloc.h> +#include <rte_mempool.h> +#include <rte_vect.h> +#include "rte_common.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_queue.h" +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_vsi.h" + +static __rte_always_inline void +sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf *pkt, + u64 desc_cmd, bool with_offloads) +{ + __m128i data_desc; + u64 desc_qw1; + u32 desc_offset; + desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA | + ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT | + ((u64)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len); + desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (with_offloads) + sxe2_tx_desc_fill_offloads(pkt, &desc_qw1); + data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc); +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + volatile union sxe2_tx_data_desc *desc; + struct sxe2_tx_buffer *buffer; + u16 next_use; + u16 res_num; + u16 tx_num; + u16 i; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free_vec(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx pkts sse batch: may not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + tx_num = nb_pkts; + next_use = txq->next_use; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + txq->desc_free_num -= nb_pkts; + res_num = txq->ring_depth - txq->next_use; + if (tx_num >= res_num) { + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num); + for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, 
+ SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++, + (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS), + with_offloads); + tx_num -= res_num; + next_use = 0; + txq->next_rs = txq->rs_thresh - 1; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + } + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num); + for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + next_use += tx_num; + if (next_use > txq->next_rs) { + txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + } + txq->next_use = next_use; + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, nb_pkts); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + u16 tx_done_num = 0; + u16 tx_once_num; + u16 tx_need_num; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh); + tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq, + tx_pkts + tx_done_num, + tx_need_num, with_offloads); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } + return tx_done_num; +} + +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, false); +} +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, true); +} + +static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq) +{ + volatile 
union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + struct rte_mbuf *mbuf0, *mbuf1; + __m128i dma_addr0, dma_addr1; + __m128i virt_addr0, virt_addr1; + __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, + RTE_PKTMBUF_HEADROOM); + s32 ret; + u16 i; + u16 new_tail; + buffer = &rxq->buffer_ring[rxq->realloc_start]; + desc = &rxq->desc_ring[rxq->realloc_start]; + ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer, + SXE2_RX_REARM_THRESH_VEC); + if (ret != 0) { + PMD_LOG_RX_INFO("Rx mbuf vec alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, rxq->queue_id); + if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) { + dma_addr0 = _mm_setzero_si128(); + for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) { + buffer[i] = &rxq->fake_mbuf; + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read), + dma_addr0); + } + } + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed += + SXE2_RX_REARM_THRESH_VEC; + goto l_end; + } + for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) { + mbuf0 = buffer[0]; + mbuf1 = buffer[1]; +#if RTE_IOVA_IN_MBUF + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) != + offsetof(struct rte_mbuf, buf_addr) + 8); +#endif + virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr); + virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr); +#if RTE_IOVA_IN_MBUF + dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1); +#else + dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1); +#endif + dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room); + dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), + dma_addr0); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), + dma_addr1); + } + rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC; + if (rxq->realloc_start >= rxq->ring_depth) + rxq->realloc_start = 0; + rxq->realloc_num -= 
SXE2_RX_REARM_THRESH_VEC; + new_tail = (rxq->realloc_start == 0) ? + (rxq->ring_depth - 1) : (rxq->realloc_start - 1); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail); +l_end: + return; +} + +static __rte_always_inline __m128i +sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4]) +{ + __m128i descs_tmp1, descs_tmp2; + __m128i descs_fnav_vld; + __m128i v_zeros, v_ffff, v_u32_one; + __m128i m_flags; + const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID); + descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]); + descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]); + descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2); + descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26); + descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31); + v_zeros = _mm_setzero_si128(); + v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros); + v_u32_one = _mm_srli_epi32(v_ffff, 31); + m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one); + m_flags = _mm_and_si128(m_flags, fdir_flags); + return m_flags; +} + +static __rte_always_inline void +sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq, + volatile union sxe2_rx_desc *desc __rte_unused, + __m128i descs_arr[4], + struct rte_mbuf **rx_pkts) +{ + const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value); + __m128i rearm_arr[4]; + __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags; + const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04, + 0x00001C04, 0x00001C04); + const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000, + 0x20000000, 0x20000000); + const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, + 0, 0, 0, RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + 0, 0, 0, 0); + const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH, + 0, 0, 0, 0); + const __m128i cksum_flags = + _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + 
RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1)); + const __m128i cksum_mask = + _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD); + const __m128i vlan_mask = + _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED); + flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]); + tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]); + tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags); + tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags); + tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask); + tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask); + tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo); + flags = _mm_and_si128(tmp_flags, vlan_mask); + tmp_desc_lo = 
_mm_srli_epi32(tmp_desc_lo, 10); + tmp_flags = _mm_shuffle_epi8(cksum_flags, tmp_desc_lo); + tmp_flags = _mm_slli_epi32(tmp_flags, 1); + tmp_flags = _mm_and_si128(tmp_flags, cksum_mask); + flags = _mm_or_si128(flags, tmp_flags); + tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27); + tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi); + flags = _mm_or_si128(flags, tmp_flags); +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + if (rxq->fnav_enable) { + __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr); + flags = _mm_or_si128(flags, tmp_fnav_flags); + rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id; + rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id; + rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id; + rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id; + } +#endif + rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30); + rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30); + rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30); + rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) != + offsetof(struct rte_mbuf, rearm_data) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) != + RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]); +} + +static inline u16 +sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts, u8 *split_rxe_flags, + u8 *umbcast_flags) +{ + volatile union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i staterr, sterr_tmp1, sterr_tmp2; + 
__m128i pmbuf0; + __m128i ptype_all; +#ifdef RTE_ARCH_X86_64 + __m128i pmbuf1; +#endif + u32 i; + u32 bit_num; + u16 done_num = 0; + const u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + const __m128i crc_adjust = + _mm_set_epi16(0, 0, 0, + -rxq->crc_len, + 0, -rxq->crc_len, + 0, 0); + const __m128i rvp_shuf_mask = + _mm_set_epi8(7, 6, 5, 4, + 3, 2, + 13, 12, + 0XFF, 0xFF, 13, 12, + 0xFF, 0xFF, 0xFF, 0xFF); + const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL, + 0x0000000100000001LL); + const __m128i eop_mask = _mm_slli_epi32(dd_mask, + SXE2_RX_DESC_STATUS_EOP_SHIFT); + const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL, + 0x0000208000002080LL); + const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x04, 0x0C, + 0x00, 0x08); + const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12); + desc = &rxq->desc_ring[rxq->processing_idx]; + rte_prefetch0(desc); + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE); + if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC) + sxe2_rx_queue_rearm_sse(rxq); + if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK) == 0) + goto l_end; + buffer = &rxq->buffer_ring[rxq->processing_idx]; + for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE, + desc += SXE2_RX_NUM_PER_LOOP_SSE) { + pmbuf0 = 
_mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i])); + descs_arr[3] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &desc + 3)); + rte_compiler_barrier(); + _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0); +#ifdef RTE_ARCH_X86_64 + pmbuf1 = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i + 2])); +#endif + descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &desc + 2)); + rte_compiler_barrier(); + descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &desc + 1)); + rte_compiler_barrier(); + descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &desc)); +#ifdef RTE_ARCH_X86_64 + _mm_storeu_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[i + 2]), pmbuf1); +#endif + if (split_rxe_flags) { + rte_mbuf_prefetch_part2(rx_pkts[i]); + rte_mbuf_prefetch_part2(rx_pkts[i + 1]); + rte_mbuf_prefetch_part2(rx_pkts[i + 2]); + rte_mbuf_prefetch_part2(rx_pkts[i + 3]); + } + rte_compiler_barrier(); + mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask); + mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask); + mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask); + mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask); + sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]); + sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]); + sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts); + mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust); + mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust); + mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust); + mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust); + staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2); + ptype_all = _mm_and_si128(staterr, ptype_mask); + _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1, + mbuf_arr[3]); + _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1, + mbuf_arr[2]); + if (umbcast_flags != NULL) { + const __m128i umbcast_mask = + _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + 
SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK); + const __m128i umbcast_shuf_mask = + _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x07, 0x0F, + 0x03, 0x0B); + __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask); + umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask); + *(s32 *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits); + umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + if (split_rxe_flags != NULL) { + __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask); + __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask); + rxe_bits = _mm_srli_epi32(rxe_bits, 7); + eop_bits = _mm_or_si128(eop_bits, rxe_bits); + eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask); + *(s32 *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits); + split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + staterr = _mm_and_si128(staterr, dd_mask); + staterr = _mm_packs_epi32(staterr, _mm_setzero_si128()); + _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1, + mbuf_arr[1]); + _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1, + mbuf_arr[0]); + rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)]; + rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)]; + rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)]; + rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)]; + bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr)); + done_num += bit_num; + if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE)) + break; + } + rxq->processing_idx += done_num; + rxq->processing_idx &= (rxq->ring_depth - 1); + rxq->realloc_num += done_num; + PMD_LOG_RX_DEBUG("port_id=%u queue_id=%u last_id=%u recv_pkts=%d", + rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num); +l_end: + return done_num; +} +static __rte_always_inline u16 +sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ 
+ const u64 *split_rxe_flags64; + u8 split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u8 umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u16 rx_done_num; + u16 rx_pkt_done_num; + rx_pkt_done_num = 0; + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, umbcast_flags); + } else { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, NULL); + } + if (rx_done_num == 0) + goto l_end; + if (!rxq->vsi->adapter->devargs.sw_stats_en) { + split_rxe_flags64 = (u64 *)split_rxe_flags; + if (rxq->pkt_first_seg == NULL && + split_rxe_flags64[0] == 0 && + split_rxe_flags64[1] == 0 && + split_rxe_flags64[2] == 0 && + split_rxe_flags64[3] == 0) { + rx_pkt_done_num = rx_done_num; + goto l_end; + } + if (rxq->pkt_first_seg == NULL) { + while (rx_pkt_done_num < rx_done_num && + split_rxe_flags[rx_pkt_done_num] == 0) + rx_pkt_done_num++; + if (rx_pkt_done_num == rx_done_num) + goto l_end; + rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num]; + } + } + rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num], + rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num], + &umbcast_flags[rx_pkt_done_num]); +l_end: + return rx_pkt_done_num; +} + +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + u16 done_num = 0; + u16 once_num; + while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) { + once_num = + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, + SXE2_RX_PKTS_BURST_BATCH_NUM_VEC); + done_num += once_num; + nb_pkts -= once_num; + if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) + goto l_end; + } + done_num += + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, nb_pkts); +l_end: + SXE2_RX_STATS_CNT(rx_queue, rx_pkts_num, done_num); + return done_num; +} -- 2.47.3 ^ permalink raw reply related 
[flat|nested] 143+ messages in thread
* [PATCH v7 00/10] Add Linkdata sxe2 driver
  2026-05-06  2:12 ` [PATCH v6 10/10] net/sxe2: add vectorized " liujie5
@ 2026-05-06  3:31 ` liujie5
  2026-05-06  3:31 ` [PATCH v7 01/10] doc: add sxe2 guide and release notes liujie5
  ` (8 more replies)
  0 siblings, 9 replies; 143+ messages in thread
From: liujie5 @ 2026-05-06  3:31 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>

V7:
- Addressed AI comments

V6:
- Addressed AI comments

Jie Liu (10):
  doc: add sxe2 guide and release notes
  drivers: add sxe2 basic structures
  common/sxe2: add base driver skeleton
  drivers: add base driver probe skeleton
  drivers: support PCI BAR mapping
  common/sxe2: add ioctl interface for DMA map and unmap
  net/sxe2: support queue setup and control
  drivers: add data path for Rx and Tx
  net/sxe2: add vectorized Rx and Tx
  net/sxe2: add AVX2 vector data path for Rx and Tx

 doc/guides/nics/features/sxe2.ini          |  11 +
 doc/guides/nics/index.rst                  |   1 +
 doc/guides/nics/sxe2.rst                   |  23 +
 doc/guides/rel_notes/release_26_07.rst     |   4 +
 drivers/common/sxe2/meson.build            |  15 +
 drivers/common/sxe2/sxe2_common.c          | 683 +++++++++++++++
 drivers/common/sxe2/sxe2_common.h          |  86 ++
 drivers/common/sxe2/sxe2_common_log.c      |  75 ++
 drivers/common/sxe2/sxe2_common_log.h      | 263 ++++++
 drivers/common/sxe2/sxe2_errno.h           | 110 +++
 drivers/common/sxe2/sxe2_host_regs.h       | 707 +++++++++++++++
 drivers/common/sxe2/sxe2_internal_ver.h    |  33 +
 drivers/common/sxe2/sxe2_ioctl_chnl.c      | 326 +++++++
 drivers/common/sxe2/sxe2_ioctl_chnl.h      | 141 +++
 drivers/common/sxe2/sxe2_ioctl_chnl_func.h |  63 ++
 drivers/common/sxe2/sxe2_osal.h            | 582 ++++++++++++
 drivers/common/sxe2/sxe2_type.h            |  64 ++
 drivers/meson.build                        |   1 +
 drivers/net/meson.build                    |   1 +
 drivers/net/sxe2/meson.build               |  46 +
 drivers/net/sxe2/sxe2_cmd_chnl.c           | 319 +++++++
 drivers/net/sxe2/sxe2_cmd_chnl.h           |  33 +
 drivers/net/sxe2/sxe2_drv_cmd.h            | 398 +++++++++
 drivers/net/sxe2/sxe2_ethdev.c             | 971 +++++++++++++++
 drivers/net/sxe2/sxe2_ethdev.h             | 315 +++++++
 drivers/net/sxe2/sxe2_irq.h                |  49 ++
 drivers/net/sxe2/sxe2_queue.c              |  39 +
 drivers/net/sxe2/sxe2_queue.h              | 227 +++++
 drivers/net/sxe2/sxe2_rx.c                 | 579 ++++++++++
 drivers/net/sxe2/sxe2_rx.h                 |  34 +
 drivers/net/sxe2/sxe2_tx.c                 | 447 ++++++++++
 drivers/net/sxe2/sxe2_tx.h                 |  32 +
 drivers/net/sxe2/sxe2_txrx.c               | 384 ++++++++
 drivers/net/sxe2/sxe2_txrx.h               |  21 +
 drivers/net/sxe2/sxe2_txrx_common.h        | 541 ++++++++++++
 drivers/net/sxe2/sxe2_txrx_poll.c          | 966 ++++++++++++++++
 drivers/net/sxe2/sxe2_txrx_poll.h          |  17 +
 drivers/net/sxe2/sxe2_txrx_vec.c           | 188 ++++
 drivers/net/sxe2/sxe2_txrx_vec.h           |  78 ++
 drivers/net/sxe2/sxe2_txrx_vec_avx2.c      | 749 ++++++++++++++++
 drivers/net/sxe2/sxe2_txrx_vec_common.h    | 235 +++++
 drivers/net/sxe2/sxe2_txrx_vec_sse.c       | 549 ++++++++++++
 drivers/net/sxe2/sxe2_vsi.c                | 211 +++++
 drivers/net/sxe2/sxe2_vsi.h                | 205 +++++
 44 files changed, 10822 insertions(+)
 create mode 100644 doc/guides/nics/features/sxe2.ini
 create mode 100644 doc/guides/nics/sxe2.rst
 create mode 100644 drivers/common/sxe2/meson.build
 create mode 100644 drivers/common/sxe2/sxe2_common.c
 create mode 100644 drivers/common/sxe2/sxe2_common.h
 create mode 100644 drivers/common/sxe2/sxe2_common_log.c
 create mode 100644 drivers/common/sxe2/sxe2_common_log.h
 create mode 100644 drivers/common/sxe2/sxe2_errno.h
 create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
 create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h
 create mode 100644 drivers/common/sxe2/sxe2_osal.h
 create mode 100644 drivers/common/sxe2/sxe2_type.h
 create mode 100644 drivers/net/sxe2/meson.build
 create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
 create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
 create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
 create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
 create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
 create mode 100644 drivers/net/sxe2/sxe2_irq.h
 create mode 100644 drivers/net/sxe2/sxe2_queue.c
 create mode 100644 drivers/net/sxe2/sxe2_queue.h
 create mode 100644 drivers/net/sxe2/sxe2_rx.c
 create mode 100644 drivers/net/sxe2/sxe2_rx.h
 create mode 100644 drivers/net/sxe2/sxe2_tx.c
 create mode 100644 drivers/net/sxe2/sxe2_tx.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx.c
 create mode 100644 drivers/net/sxe2/sxe2_txrx.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
 create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_avx2.c
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c
 create mode 100644 drivers/net/sxe2/sxe2_vsi.c
 create mode 100644 drivers/net/sxe2/sxe2_vsi.h

--
2.47.3
* [PATCH v7 01/10] doc: add sxe2 guide and release notes
  2026-05-06  3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5
@ 2026-05-06  3:31 ` liujie5
  2026-05-06  3:31 ` [PATCH v7 02/10] drivers: add sxe2 basic structures liujie5
  ` (7 subsequent siblings)
  8 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-06  3:31 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>

Add a new guide for the SXE2 PMD in the nics directory. The guide covers
driver capabilities, prerequisites, and compilation/usage instructions.
Update the release notes to announce the addition of the sxe2 network
driver.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 doc/guides/nics/features/sxe2.ini      | 11 +++++++++++
 doc/guides/nics/index.rst              |  1 +
 doc/guides/nics/sxe2.rst               | 23 +++++++++++++++++++++++
 doc/guides/rel_notes/release_26_07.rst |  4 ++++
 4 files changed, 39 insertions(+)
 create mode 100644 doc/guides/nics/features/sxe2.ini
 create mode 100644 doc/guides/nics/sxe2.rst

diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini
new file mode 100644
index 0000000000..cbf5a773fb
--- /dev/null
+++ b/doc/guides/nics/features/sxe2.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'sxe2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates it is only supported when the
+; non-vector path is selected.
+;
+[Features]
+Queue start/stop = Y
+Linux = Y
\ No newline at end of file
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index cb818284fe..e20be478f8 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -68,6 +68,7 @@ Network Interface Controller Drivers
    rnp
    sfc_efx
    softnic
+   sxe2
    tap
    thunderx
    txgbe
diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst
new file mode 100644
index 0000000000..2f9ba91c33
--- /dev/null
+++ b/doc/guides/nics/sxe2.rst
@@ -0,0 +1,23 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+SXE2 Poll Mode Driver
+=====================
+
+The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for
+10/25/50/100/200 Gbps network adapters.
+The embedded switch, Physical Functions (PF),
+and SR-IOV Virtual Functions (VF) are supported.
+
+Implementation details
+----------------------
+
+For security and robustness, this driver only deals with virtual
+memory addresses. The way resource allocations are handled by the kernel,
+combined with hardware specifications that allow it to handle virtual memory
+addresses directly, ensures that DPDK applications cannot access random
+physical memory (or memory that does not belong to the current process).
+
+This capability allows the PMD to coexist with kernel network interfaces,
+which remain functional, although they stop receiving unicast packets as
+long as they share the same MAC address.
diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index f012d47a4b..fa0f0f5cca 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -64,6 +64,10 @@ New Features
   * ``--auto-probing`` enables the initial bus probing, which is the
     current default behavior.

+* **Added Linkdata sxe2 ethernet driver.**
+
+  Added a network driver for Linkdata network adapters.
+
 Removed Items
 -------------

--
2.47.3
* [PATCH v7 02/10] drivers: add sxe2 basic structures 2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5 2026-05-06 3:31 ` [PATCH v7 01/10] doc: add sxe2 guide and release notes liujie5 @ 2026-05-06 3:31 ` liujie5 2026-05-06 3:31 ` [PATCH v7 03/10] common/sxe2: add base driver skeleton liujie5 ` (6 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 3:31 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 13 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1959 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 
index 0000000000..7d448629d5 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Failed to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void 
+sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? 
strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ERR(logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) 
\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) 
\ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) 
RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = 
-ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, + + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMIEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 
2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define 
SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 
+#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) +#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF 
<< SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + +#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define 
SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M 
SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define 
SXE2_RX_CTXT_HSPLT_1_W 2
+
+#define SXE2_RX_CTXT_INVALN_STP_S 31
+#define SXE2_RX_CTXT_INVALN_STP_W 1
+
+#define SXE2_RX_CTXT_LRO_ENABLE_S 0
+#define SXE2_RX_CTXT_LRO_ENABLE_W 1
+
+#define SXE2_RX_CTXT_CPUID_S 3
+#define SXE2_RX_CTXT_CPUID_W 8
+
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14
+
+#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25
+#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4
+
+#define SXE2_RX_CTXT_RELAX_DATA_S 29
+#define SXE2_RX_CTXT_RELAX_DATA_W 1
+
+#define SXE2_RX_CTXT_RELAX_WB_S 30
+#define SXE2_RX_CTXT_RELAX_WB_W 1
+
+#define SXE2_RX_CTXT_RELAX_RD_S 31
+#define SXE2_RX_CTXT_RELAX_RD_W 1
+
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1
+
+#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6
+#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3
+
+#define SXE2_RX_CTXT_VF_ID_S 9
+#define SXE2_RX_CTXT_VF_ID_W 8
+
+#define SXE2_RX_CTXT_PF_ID_S 17
+#define SXE2_RX_CTXT_PF_ID_W 3
+
+#define SXE2_RX_CTXT_VF_ENABLE_S 20
+#define SXE2_RX_CTXT_VF_ENABLE_W 1
+
+#define SXE2_RX_CTXT_VSI_ID_S 21
+#define SXE2_RX_CTXT_VSI_ID_W 10
+
+#define SXE2_PF_CTRLQ_FW_BASE 0x00312000
+#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000)
+#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080)
+#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100)
+#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180)
+#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200)
+#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280)
+#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300)
+#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380)
+#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400)
+#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480)
+
+#define SXE2_PF_CTRLQ_MBX_BASE 0x00316000
+#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100)
+#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180)
+#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200)
+#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280)
+#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300)
+#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380)
+#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400)
+#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480)
+#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500)
+#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580)
+
+#define SXE2_CMD_REG_LEN_M 0x3FF
+#define SXE2_CMD_REG_LEN_VFE_M BIT(28)
+#define SXE2_CMD_REG_LEN_OVFL_M BIT(29)
+#define SXE2_CMD_REG_LEN_CRIT_M BIT(30)
+#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31)
+
+#define SXE2_CMD_REG_HEAD_M 0x3FF
+
+#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500)
+#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0)
+#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1)
+
+#define SXE2_TOP_CFG_BASE 0x00292000
+#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c)
+#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0)
+
+#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214)
+#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0)
+#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8)
+#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16)
+#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24)
+#define SXE2_FW_VER_FIX_SHIFT (8)
+#define SXE2_FW_VER_SUB_SHIFT (16)
+#define SXE2_FW_VER_MAIN_SHIFT (24)
+
+#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c)
+
+#define SXE2_STATUS SXE2_FW_VER
+
+#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210)
+
+#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218)
+
+#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c)
+#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0)
+#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0)
+
+#define SXE2_TX_OE_BASE 0x00030000
+#define SXE2_RX_OE_BASE 0x00050000
+
+#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4))
+#define SXE2_VSI_L2TAGSTXVALID(_i) \
+	(SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4))
+#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4))
+#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4))
+#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4))
+#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4))
+
+#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4))
+#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4))
+#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4))
+
+#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4))
+#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8))
+#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8))
+#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8))
+
+#define SXE2_L2TAG_ID_STAG 0
+#define SXE2_L2TAG_ID_OUT_VLAN1 1
+#define SXE2_L2TAG_ID_OUT_VLAN2 2
+#define SXE2_L2TAG_ID_VLAN 3
+
+#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF
+#define SXE2_PFP_L2TAGSEN_DVM BIT(10)
+
+#define SXE2_VSI_TSR_STRIP_TAG_S 0
+#define SXE2_VSI_TSR_SHOW_TAG_S 4
+
+#define SXE2_VSI_TSR_ID_STAG BIT(0)
+#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1)
+#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2)
+#define SXE2_VSI_TSR_ID_VLAN BIT(3)
+
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3)
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7)
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19)
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23)
+
+#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0
+#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2
+#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3
+#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4
+
+#define SXE2_SWITCH_OG_BASE 0x00140000
+#define SXE2_SWITCH_SWE_BASE 0x00150000
+#define SXE2_SWITCH_RG_BASE 0x00160000
+
+#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4))
+#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4))
+
+#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9)
+
+#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1)
+#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2)
+#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3)
+#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9)
+
+#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16)
+
+#define SXE2_PCIE_SYS_READY 0x38c
+#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0)
+#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2)
+#define SXE2_PCIE_SYS_READY_R5 BIT(3)
+#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16)
+
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21)
+
+#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630)
+#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586
+
+#define SXE2_PFGEN_CTRL (0x00336000)
+#define SXE2_PFGEN_CTRL_PFSWR BIT(0)
+
+#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4))
+#define SXE2_VFGEN_CTRL_VFSWR BIT(0)
+
+#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10))
+
+#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4))
+
+#define SXE2_ACCEPT_RULE_TAGGED_S 0
+#define SXE2_ACCEPT_RULE_UNTAGGED_S 16
+
+#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4))
+#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0
+#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S)
+#define SXE2_VF_RXQ_BASE_Q_NUM_S 16
+#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S)
+
+#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4))
+#define SXE2_VF_RXQ_MAPENA_M BIT(0)
+
+#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4))
+#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0
+#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S)
+#define SXE2_VF_TXQ_BASE_Q_NUM_S 16
+#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S)
+
+#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4))
+#define SXE2_VF_TXQ_MAPENA_M BIT(0)
+
+#define PRI_PTP_BASEADDR 0x2a8000
+
+#define GLTSYN (PRI_PTP_BASEADDR + 0x0)
+#define GLTSYN_ENA_M BIT(0)
+
+#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4)
+#define GLTSYN_CMD_INIT_TIME 0x01
+#define GLTSYN_CMD_INIT_INCVAL 0x02
+#define GLTSYN_CMD_ADJ_TIME 0x04
+#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C
+#define GLTSYN_CMD_LATCHING_SHTIME 0x80
+
+#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8)
+#define GLTSYN_SYNC_PLUS_1NS 0x1
+#define GLTSYN_SYNC_MINUS_1NS 0x2
+#define GLTSYN_SYNC_EXEC 0x3
+#define GLTSYN_SYNC_GEN_PULSE 0x4
+
+#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC)
+#define GLTSYN_SEM_BUSY_M BIT(0)
+
+#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10)
+#define GLTSYN_STAT_EVENT0_M BIT(0)
+#define GLTSYN_STAT_EVENT1_M BIT(1)
+#define GLTSYN_STAT_EVENT2_M BIT(2)
+
+#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20)
+#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24)
+#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28)
+#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C)
+
+#define GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30)
+#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34)
+#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38)
+#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C)
+
+#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40)
+#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44)
+
+#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50)
+#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54)
+
+#define GLTSYN_TGT_NS(_i) \
+	(PRI_PTP_BASEADDR + 0x60 + ((_i) * 16))
+#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16))
+#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16))
+
+#define GLTSYN_EVENT_NS(_i) \
+	(PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16))
+
+#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16))
+#define GLTSYN_EVENT_S_H_MASK (0xFFFF)
+
+#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16))
+
+#define GLTSYN_AUXOUT(_i) \
+	(PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4))
+#define GLTSYN_AUXOUT_OUT_ENA BIT(0)
+#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1)
+#define GLTSYN_AUXOUT_OUTLVL BIT(3)
+#define GLTSYN_AUXOUT_INT_ENA BIT(4)
+#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3)
+
+#define GLTSYN_CLKO(_i) \
+	(PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4))
+
+#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4))
+#define GLTSYN_AUXIN_RISING_EDGE BIT(0)
+#define GLTSYN_AUXIN_FALLING_EDGE BIT(1)
+#define GLTSYN_AUXIN_ENABLE BIT(4)
+
+#define CGMAC_CSR_BASE 0x2B4000
+
+#define CGMAC_PORT_OFFSET 0x00004000
+
+#define PFP_CGM_TX_TSMEM(_port, _i) \
+	(CGMAC_CSR_BASE + 0x100 + \
+	+ CGMAC_PORT_OFFSET * _port + ((_i) * 4))
+
+#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8))
+#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8))
+
+#define CGMAC_CSR_MAC0_OFFSET 0x2B4000
+#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000))
+
+#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \
+	(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \
+	((_i) * 4))
+
+#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8))
+#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8))
+
+#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11
+#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
+#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4))
+
+#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11
+#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
+#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4))
+
+#define SXE2_IPSEC_TX_BASE (0x2A0000)
+#define SXE2_IPSEC_RX_BASE (0x2A2000)
+
+#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084)
+#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000)
+#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18)
+#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000)
+#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17)
+#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000)
+#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4)
+#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0)
+#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2)
+#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c)
+
+#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088)
+#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0)
+#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff)
+
+#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c)
+#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0)
+#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff)
+
+#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090)
+#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff)
+
+#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000)
+#define SXE2_TXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0894)
+#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18)
+#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+	(0x0a20 + 8 * (pri)))
+#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+	(0x0a60 + 8 * (pri)))
+#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+	(0x0aa0 + 8 * (pri)))
+#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988)
+#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28)
+#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+	(0x0b30 + 8 * (pri)))
+#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+	(0x0b70 + 8 * (pri)))
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h
new file mode 100644
index 0000000000..a41913fdd8
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_internal_ver.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_INTERNAL_VER_H__
+#define __SXE2_INTERNAL_VER_H__
+
+#define SXE2_VER_MAJOR_OFFSET (16)
+#define SXE2_MK_VER(major, minor) \
+	(major << SXE2_VER_MAJOR_OFFSET | minor)
+#define SXE2_MK_VER_MAJOR(ver) ((ver >> SXE2_VER_MAJOR_OFFSET) & 0xff)
+#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff)
+
+#define SXE2_ITR_VER_MAJOR_V100 1
+#define SXE2_ITR_VER_MAJOR_V200 2
+
+#define SXE2_ITR_VER_MAJOR 1
+#define SXE2_ITR_VER_MINOR 1
+#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR)
+
+#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100)
+#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200)
+
+#define SXE2LIB_ITR_VER_MAJOR 1
+#define SXE2LIB_ITR_VER_MINOR 1
+#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR)
+
+#define SXE2_DRV_CLI_VER_MAJOR 1
+#define SXE2_DRV_CLI_VER_MINOR 1
+#define SXE2_DRV_CLI_VER \
+	SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR)
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
new file mode 100644
index 0000000000..fd6823fe98
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -0,0 +1,584 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_OSAL_H__
+#define __SXE2_OSAL_H__
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_version.h>
+
+#include "sxe2_type.h"
+
+#define BIT(nr) (1UL << (nr))
+#ifndef __BITS_PER_LONG
+#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
+#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define MIN(a, b) ((a) < (b) ? (a) : (b))
+
+#define BITS_PER_BYTE 8
+
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+
+#define STRUCT_SIZE(ptr, field, num) \
+	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	     (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	     (var) = (tvar))
+#endif
+
+#define SXE2_QUEUE_WAIT_RETRY_CNT (50)
+
+#define __iomem
+
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)((n) & 0xffffffff))
+
+#define dma_addr_t rte_iova_t
+
+#define resource_size_t u64
+
+#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f)
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define BE16_TO_CPU(o) rte_be_to_cpu_16(o)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+#define udelay(x) rte_delay_us(x)
+
+#define mdelay(x) rte_delay_us(1000 * (x))
+
+#define msleep(x) rte_delay_us(1000 * (x))
+
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) \
+	(((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d))
+#endif
+
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#define __bf_shf(x) ((uint32_t)rte_bsf64(x))
+
+#ifndef BITS_PER_LONG
+#define BITS_PER_LONG 32
+#endif
+
+#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask))
+#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)))
+
+#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d)
+
+static inline void sxe2_swap_u16(u16 *a, u16 *b)
+{
+	*a += *b;
+	*b = *a - *b;
+	*a -= *b;
+}
+
+#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b)
+
+enum sxe2_itr_idx {
+	SXE2_ITR_IDX_0 = 0,
+	SXE2_ITR_IDX_1,
+	SXE2_ITR_IDX_2,
+	SXE2_ITR_IDX_NONE,
+};
+
+#define MAX_ERRNO 4095
+#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO)
+static inline bool IS_ERR(const void *ptr)
+{
+	return IS_ERR_VALUE((uintptr_t)ptr);
+}
+
+#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
+
+#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \
+	(((1ULL << width) - 1) << shift))
+
+#define ETH_P_8021Q 0x8100
+#define ETH_P_8021AD 0x88a8
+#define ETH_P_QINQ1 0x9100
+
+#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0]))
+
+struct sxe2_lock {
+	rte_spinlock_t spinlock;
+};
+#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock)
+#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock)
+#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock)
+#define sxe2_destroy_lock(sp) RTE_SET_USED(sp)
+
+#define COMPILER_BARRIER() \
+	{ asm volatile("" ::: "memory"); }
+
+struct sxe2_list_head_type {
+	struct sxe2_list_head_type *next, *prev;
+};
+
+#define LIST_HEAD_TYPE sxe2_list_head_type
+
+#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member)
+#define LIST_FIRST_ENTRY(ptr, type, member) \
+	SXE2_LIST_ENTRY((ptr)->next, type, member)
+#define LIST_NEXT_ENTRY(pos, member) \
+	SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member)
+
+static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list)
+{
+	list->next = list;
+	COMPILER_BARRIER();
+	list->prev = list;
+	COMPILER_BARRIER();
+}
+
+static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr,
+				 struct LIST_HEAD_TYPE *prev,
+				 struct LIST_HEAD_TYPE *next)
+{
+	next->prev = curr;
+	curr->next = next;
+	curr->prev = prev;
+	COMPILER_BARRIER();
+	prev->next = curr;
+	COMPILER_BARRIER();
+}
+
+#define LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next)
+#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head)
+
+static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next)
+{
+	next->prev = prev;
+	COMPILER_BARRIER();
+	prev->next = next;
+	COMPILER_BARRIER();
+}
+
+static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry)
+{
+	__list_del(entry->prev, entry->next);
+}
+#define LIST_DEL(entry) __list_del_entry(entry)
+
+static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head)
+{
+	COMPILER_BARRIER();
+	return head->next == head;
+}
+
+#define LIST_IS_EMPTY(head) __list_is_empty(head)
+
+#define LIST_FOR_EACH_ENTRY(pos, head, member) \
+	for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \
+	     &pos->member != (head); \
+	     pos = LIST_NEXT_ENTRY(pos, member))
+
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \
+	for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \
+	     n = LIST_NEXT_ENTRY(pos, member); \
+	     &pos->member != (head); \
+	     pos = n, n = LIST_NEXT_ENTRY(n, member))
+
+struct sxe2_blk_list_head_type {
+	struct sxe2_blk_list_head_type *next_blk;
+	struct sxe2_blk_list_head_type *next;
+	u16 blk_size;
+	u16 blk_id;
+};
+
+#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type
+
+static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node,
+				     struct BLK_LIST_HEAD_TYPE *head)
+{
+	struct BLK_LIST_HEAD_TYPE *curr = head->next_blk;
+	struct BLK_LIST_HEAD_TYPE *prev = head;
+
+	while (curr != NULL && curr->blk_id < node->blk_id) {
+		prev = curr;
+		curr = curr->next_blk;
+	}
+
+	if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) {
+		prev->blk_size += node->blk_size;
+		node->blk_size = 0;
+	} else {
+		node->next_blk = curr;
+		prev->next_blk = node;
+	}
+
+	node = (node->blk_size == 0) ? prev : node;
+
+	if (curr) {
+		if (node->blk_id + node->blk_size == curr->blk_id) {
+			node->blk_size += curr->blk_size;
+			curr->blk_size = 0;
+			node->next_blk = curr->next_blk;
+		} else {
+			node->next_blk = curr;
+		}
+	}
+}
+
+static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get(
+	struct BLK_LIST_HEAD_TYPE *head, u16 blk_size)
+{
+	struct BLK_LIST_HEAD_TYPE *curr = head->next_blk;
+	struct BLK_LIST_HEAD_TYPE *prev = head;
+	struct BLK_LIST_HEAD_TYPE *blk_max_node = curr;
+	struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head;
+	struct BLK_LIST_HEAD_TYPE *ret = NULL;
+	s32 i = blk_size;
+
+	while (curr && curr->blk_size != blk_size) {
+		if (curr->blk_size > blk_max_node->blk_size) {
+			blk_max_node = curr;
+			blk_max_node_pre = prev;
+		}
+		prev = curr;
+		curr = curr->next_blk;
+	}
+
+	if (curr != NULL) {
+		prev->next_blk = curr->next_blk;
+		ret = curr;
+		goto l_end;
+	}
+
+	if (blk_max_node->blk_size < blk_size)
+		goto l_end;
+
+	ret = blk_max_node;
+	prev = blk_max_node_pre;
+
+	curr = blk_max_node;
+	while (i != 0) {
+		curr = curr->next;
+		i--;
+	}
+	curr->blk_size = blk_max_node->blk_size - blk_size;
+	blk_max_node->blk_size = blk_size;
+	prev->next_blk = curr;
+
+l_end:
+	return ret;
+}
+
+#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head)
+#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size)
+
+#ifndef BIT_ULL
+#define BIT_ULL(nr) (1ULL << (nr))
+#endif
+
+static inline bool check_is_pow2(u64 val)
+{
+	return (val && !(val & (val - 1)));
+}
+
+static inline u8 sxe2_setbit_cnt8(u8 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 8; i++) {
+		bits += (num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
+static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max)
+{
+	u16 count = 0;
+	u16 i;
+	bool ret = false;
+
+	for (i = 0; i < size; i++) {
+		if (!mask[i])
+			continue;
+
+		if (count == max)
+			goto l_end;
+
+		count += sxe2_setbit_cnt8(mask[i]);
+		if (count > max)
+			goto l_end;
+	}
+
+	ret = true;
+l_end:
+	return ret;
+}
+
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long))
+#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32)
+
+#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h))))
+
+#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1)))
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+#define BITMAP_MEM_ALIGNMENT 8
+#else
+#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long))
+#endif
+#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1)
+#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
+
+#define DECLARE_BITMAP(name, bits) \
+	unsigned long name[BITS_TO_LONGS(bits)]
+#define BITMAP_TYPE unsigned long
+#define small_const_nbits(nbits) \
+	(__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0)
+
+static inline void set_bit(u32 nr, unsigned long *addr)
+{
+	addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG);
+}
+
+static inline void clear_bit(u32 nr, unsigned long *addr)
+{
+	addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG));
+}
+
+static inline u32 test_bit(u32 nr, const volatile unsigned long *addr)
+{
+	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1)));
+}
+
+static inline u32 bitmap_weight(const unsigned long *src, u32 nbits)
+{
+	u32 cnt = 0;
+	u16 i;
+	for (i = 0; i < nbits; i++) {
+		if (test_bit(i, src))
+			cnt++;
+	}
+	return cnt;
+}
+
+static inline bool bitmap_empty(const unsigned long *src, u32 nbits)
+{
+	u16 i;
+	for (i = 0; i < nbits; i++) {
+		if (test_bit(i, src))
+			return false;
+	}
+	return true;
+}
+
+static inline void bitmap_zero(unsigned long *dst, u32 nbits)
+{
+	u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+	memset(dst, 0, len);
+}
+
+static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
+			 const unsigned long *bitmap2, u32 bits)
+{
+	u32 k;
+	u32 lim = bits/__BITS_PER_LONG;
+	unsigned long result = 0;
+	for (k = 0; k < lim; k++)
+		result |= (dst[k] = bitmap1[k] & bitmap2[k]);
+	if (bits % __BITS_PER_LONG)
+		result |= (dst[k] = bitmap1[k] & bitmap2[k] &
+			   BITMAP_LAST_WORD_MASK(bits));
+	return result != 0;
+}
+
+static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1,
+			      const unsigned long *src2, u32 nbits)
+{
+	if (small_const_nbits(nbits))
+		return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0;
+	return __bitmap_and(dst, src1, src2, nbits);
+}
+
+static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
+			const unsigned long *bitmap2, int bits)
+{
+	int k;
+	int nr = BITS_TO_LONGS(bits);
+
+	for (k = 0; k < nr; k++)
+		dst[k] = bitmap1[k] | bitmap2[k];
+}
+
+static inline void bitmap_or(unsigned long *dst, const unsigned long *src1,
+			     const unsigned long *src2, u32 nbits)
+{
+	if (small_const_nbits(nbits))
+		*dst = *src1 | *src2;
+	else
+		__bitmap_or(dst, src1, src2, nbits);
+}
+
+static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
+			   const unsigned long *bitmap2, u32 bits)
+{
+	u32 k;
+	u32 lim = bits/__BITS_PER_LONG;
+	unsigned long result = 0;
+
+	for (k = 0; k < lim; k++)
+		result |= (dst[k] = bitmap1[k] & ~bitmap2[k]);
+	if (bits % __BITS_PER_LONG)
+		result |= (dst[k] = bitmap1[k] & ~bitmap2[k] &
+			   BITMAP_LAST_WORD_MASK(bits));
+	return result != 0;
+}
+
+static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1,
+				const unsigned long *src2, u32 nbits)
+{
+	if (small_const_nbits(nbits))
+		return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0;
+	return __bitmap_andnot(dst, src1, src2, nbits);
+}
+
+static bool __bitmap_equal(const unsigned long *bitmap1,
+			   const unsigned long *bitmap2, u32 bits)
+{
+	u32 k, lim = bits/__BITS_PER_LONG;
+	for (k = 0; k < lim; ++k)
+		if (bitmap1[k] != bitmap2[k])
+			return false;
+
+	if (bits % __BITS_PER_LONG)
+		if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits))
+			return false;
+
+	return true;
+}
+
+static inline bool bitmap_equal(const unsigned long *src1,
+				const unsigned long *src2, u32 nbits)
+{
+	if (small_const_nbits(nbits))
+		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
+	if (__rte_constant(nbits & BITMAP_MEM_MASK) &&
+	    IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
+		return !memcmp(src1, src2, nbits / 8);
+	return __bitmap_equal(src1, src2, nbits);
+}
+
+static inline unsigned long
+find_next_bit(const unsigned long *addr, unsigned long size,
+	      unsigned long offset)
+{
+	u16 i;
+
+	for (i = offset; i < size; i++) {
+		if (test_bit(i, addr))
+			break;
+	}
+	return i;
+}
+
+static inline unsigned long
+find_next_zero_bit(const unsigned long *addr, unsigned long size,
+		   unsigned long offset)
+{
+	u16 i;
+	for (i = offset; i < size; i++) {
+		if (!test_bit(i, addr))
+			break;
+	}
+	return i;
+}
+
+static inline void bitmap_copy(unsigned long *dst, const unsigned long *src,
+			       u32 nbits)
+{
+	u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
+	memcpy(dst, src, len);
+}
+
+static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size)
+{
+	return find_next_zero_bit(addr, size, 0);
+}
+
+static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size)
+{
+	return find_next_bit(addr, size, 0);
+}
+
+#define for_each_clear_bit(bit, addr, size) \
+	for ((bit) = find_first_zero_bit((addr), (size)); \
+	     (bit) < (size); \
+	     (bit) = find_next_zero_bit((addr), (size), (bit) + 1))
+
+#define for_each_set_bit(bit, addr, size) \
+	for ((bit) = find_first_bit((addr), (size)); \
+	     (bit) < (size); \
+	     (bit) = find_next_bit((addr), (size), (bit) + 1))
+
+struct sxe2_adapter;
+
+static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size)
+{
+	return rte_zmalloc(NULL, size, 0);
+}
+
+static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size)
+{
+	return rte_calloc(NULL, num, size, 0);
+}
+
+static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr)
+{
+	rte_free(ptr);
+}
+
+static inline void *sxe2_memdup(__rte_unused struct sxe2_adapter *ad,
+				const void *src, size_t size)
+{
+	void *p;
+
+	p = sxe2_malloc(ad, size);
+	if (p)
+		rte_memcpy(p, src, size);
+	return p;
+}
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h
new file mode 100644
index 0000000000..56d0a11f48
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_type.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_TYPES_H__
+#define __SXE2_TYPES_H__
+
+#include <sys/time.h>
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <string.h>
+#include <stdint.h>
+
+#if defined __BYTE_ORDER__
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+#define __BIG_ENDIAN_BITFIELD
+#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+#define __LITTLE_ENDIAN_BITFIELD
+#endif
+#elif defined __BYTE_ORDER
+#if __BYTE_ORDER == __BIG_ENDIAN
+#define __BIG_ENDIAN_BITFIELD
+#elif __BYTE_ORDER == __LITTLE_ENDIAN
+#define __LITTLE_ENDIAN_BITFIELD
+#endif
+#elif defined __BIG_ENDIAN__
+#define __BIG_ENDIAN_BITFIELD
+#elif defined __LITTLE_ENDIAN__
+#define __LITTLE_ENDIAN_BITFIELD
+#elif defined RTE_TOOLCHAIN_MSVC
+#define __LITTLE_ENDIAN_BITFIELD
+#else
+#error "Unknown endianness."
+#endif
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef uint64_t u64;
+
+typedef char s8;
+typedef int16_t s16;
+typedef int32_t s32;
+typedef int64_t s64;
+
+typedef s8 S8;
+typedef s16 S16;
+typedef s32 S32;
+
+#define __le16 u16
+#define __le32 u32
+#define __le64 u64
+
+#define __be16 u16
+#define __be32 u32
+#define __be64 u64
+
+#define STATIC static
+
+#define ETH_ALEN 6
+
+#endif
diff --git a/drivers/meson.build b/drivers/meson.build
index 6ae102e943..d4ae512bae 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -12,6 +12,7 @@ subdirs = [
         'common/qat', # depends on bus.
         'common/sfc_efx', # depends on bus.
        'common/zsda', # depends on bus.
+        'common/sxe2', # depends on bus.
        'mempool', # depends on common and bus.
        'dma', # depends on common and bus.
        'net', # depends on common, bus, mempool
--
2.47.3

^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v7 03/10] common/sxe2: add base driver skeleton
2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5
2026-05-06 3:31 ` [PATCH v7 01/10] doc: add sxe2 guide and release notes liujie5
2026-05-06 3:31 ` [PATCH v7 02/10] drivers: add sxe2 basic structures liujie5
@ 2026-05-06 3:31 ` liujie5
2026-05-06 3:31 ` [PATCH v7 04/10] drivers: add base driver probe skeleton liujie5
` (5 subsequent siblings)
8 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-06 3:31 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Initialize the sxe2 PMD skeleton by implementing the PCI probe and
remove functions. This includes the setup and cleanup of a character
device used for control path communication between the user space and
the hardware. The character device provides an interface for
ioctl-based management operations, supporting device-specific
configuration.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 drivers/common/sxe2/meson.build | 2 +
 drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++
 drivers/common/sxe2/sxe2_common.h | 86 +++
 drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++
 drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++
 drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++
 6 files changed, 1071 insertions(+)
 create mode 100644 drivers/common/sxe2/sxe2_common.c
 create mode 100644 drivers/common/sxe2/sxe2_common.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h

diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build
index 7d448629d5..3626fb1119 100644
--- a/drivers/common/sxe2/meson.build
+++ b/drivers/common/sxe2/meson.build
@@ -9,5 +9,7 @@ cflags += [
 deps += ['bus_pci', 'net', 'eal', 'ethdev']
 sources = files(
+        'sxe2_common.c',
         'sxe2_common_log.c',
+        'sxe2_ioctl_chnl.c',
 )
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
new file mode 100644
index 0000000000..dfdefb8b78
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -0,0 +1,636 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_version.h>
+#include <rte_pci.h>
+#include <rte_dev.h>
+#include <rte_devargs.h>
+#include <rte_class.h>
+#include <rte_malloc.h>
+#include <rte_errno.h>
+#include <rte_fbarray.h>
+#include <rte_eal.h>
+#include <eal_private.h>
+#include <eal_memcfg.h>
+#include <bus_driver.h>
+#include <bus_pci_driver.h>
+#include <eal_export.h>
+
+#include "sxe2_errno.h"
+#include "sxe2_common.h"
+#include "sxe2_common_log.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list =
+	TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list);
+
+static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list =
+	TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list);
+
+static pthread_mutex_t sxe2_common_devices_list_lock;
+
+static struct rte_pci_id *sxe2_common_pci_id_table;
+
+static const struct {
+	const s8 *name;
+	u32 class_type;
+} sxe2_class_types[] = {
+	{ .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH },
+	{ .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA },
+};
+
+static u32 sxe2_class_name_to_value(const s8 *class_name)
+{
+	u32 class_type = SXE2_CLASS_TYPE_INVALID;
+	u32 i;
+
+	for (i = 0; i < RTE_DIM(sxe2_class_types); i++) {
+		if (strcmp(class_name, sxe2_class_types[i].name) == 0)
+			class_type = sxe2_class_types[i].class_type;
+	}
+
+	return class_type;
+}
+
+static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev)
+{
+	struct sxe2_common_device *cdev = NULL;
+
+	TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) {
+		if (rte_dev == cdev->dev)
+			goto l_end;
+	}
+
+	cdev = NULL;
+l_end:
+	return cdev;
+}
+
+static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type)
+{
+	struct sxe2_class_driver *cdrv = NULL;
+
+	TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) {
+		if (cdrv->drv_class == class_type)
+			goto l_end;
+	}
+
+	cdrv = NULL;
+l_end:
+	return cdrv;
+}
+
+static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info,
+				     const struct rte_devargs *devargs)
+{
+	const struct rte_kvargs_pair *pair;
+	struct rte_kvargs *kvlist;
+	s32 ret = SXE2_ERROR;
+	u32 i;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL) {
+		ret = SXE2_ERR_INVAL;
+		goto l_end;
+	}
+
+	for (i = 0; i < kvlist->count; i++) {
+		pair = &kvlist->pairs[i];
+		if (pair->value == NULL || *(pair->value) == '\0') {
+			PMD_LOG_ERR(COM, "Key %s has no value.", pair->key);
+			rte_kvargs_free(kvlist);
+			ret = SXE2_ERR_INVAL;
+			goto l_end;
+		}
+	}
+
+	kv_info->kvlist = kvlist;
+	ret = SXE2_SUCCESS;
+	PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.",
+		      kv_info->kvlist->count);
+l_end:
+	return ret;
+}
+
+static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info)
+{
+	if ((kv_info != NULL) && (kv_info->kvlist != NULL)) {
+		rte_kvargs_free(kv_info->kvlist);
+		kv_info->kvlist = NULL;
+	}
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process)
+s32
+sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info,
+		    const s8 *const key_match, arg_handler_t handler, void *opaque_arg)
+{
+	const struct rte_kvargs_pair *pair;
+	struct rte_kvargs *kvlist;
+	u32 i;
+	s32 ret = SXE2_SUCCESS;
+
+	if ((kv_info == NULL) || (kv_info->kvlist == NULL) ||
+	    (key_match == NULL)) {
+		PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter.");
+		ret = SXE2_ERR_INVAL;
+		goto l_end;
+	}
+	kvlist = kv_info->kvlist;
+
+	for (i = 0; i < kvlist->count; i++) {
+		pair = &kvlist->pairs[i];
+		if (strcmp(pair->key, key_match) == 0) {
+			ret = (*handler)(pair->key, pair->value, opaque_arg);
+			if (ret)
+				goto l_end;
+
+			kv_info->is_used[i] = true;
+			break;
+		}
+	}
+
+l_end:
+	return ret;
+}
+
+static s32
sxe2_parse_class_type(const s8 *key, const s8 *value, void *args) +{ + u32 *class_type = (u32 *)args; + s32 ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + 
(void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + 
PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + } + + cdev->cdrv = cdrv; +l_end: + return ret; +} + +static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = 
sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto l_free_args; + } + + ret = sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to find the device to remove."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const 
struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool exists = false; + + for (i = 0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if 
(sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_common_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_common_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); +#ifdef SXE2_DPDK_DEBUG + sxe2_common_log_stream_init(); +#endif + sxe2_common_pci_init(); + sxe2_common_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..f62e00e053 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = SXE2_ERR_IO; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, restart the app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]" + " opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct 
rte_pci_device *pci_dev) +{ + s32 ret = SXE2_SUCCESS; + s32 fd = 0; + s8 drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd > 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, restart the app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + 
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
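The control path added in this patch talks to the kernel module through a per-device character node named after the PCI address. A minimal sketch of how that path is constructed, mirroring the `snprintf()` format used by `sxe2_drv_dev_open()`; the subsequent `open()` and `SXE2_COM_CMD_HANDSHAKE` ioctl need the matching sxe2 kernel module present and are omitted here:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-"

/* Build the character-device path the same way sxe2_drv_dev_open() does:
 * hex-formatted domain:bus:devid.function appended to the fixed prefix. */
static void sxe2_chrdev_path(char *buf, size_t len, uint32_t domain,
			     uint8_t bus, uint8_t devid, uint8_t function)
{
	snprintf(buf, len, "%s%04" PRIx32 ":%02" PRIx8 ":%02" PRIx8 ".%" PRIx8,
		 SXE2_CHR_DEV_NAME, domain, bus, devid, function);
}
```

For PCI address `0000:03:00.0` this yields `/dev/sxe2-dpdk-0000:03:00.0`; note the components are printed in hex, matching how the kernel side is expected to name the node.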
* [PATCH v7 04/10] drivers: add base driver probe skeleton 2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5 ` (2 preceding siblings ...) 2026-05-06 3:31 ` [PATCH v7 03/10] common/sxe2: add base driver skeleton liujie5 @ 2026-05-06 3:31 ` liujie5 2026-05-06 3:31 ` [PATCH v7 05/10] drivers: support PCI BAR mapping liujie5 ` (4 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 3:31 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 22 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3025 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 
100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, restart the app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64, + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 
'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..160a0de8ed --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Process the base subdirectory and collect the target objects + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, 
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return 
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "enable queues failed"); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + 
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = NULL; + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret = SXE2_SUCCESS; + + if (!cdev) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI(cdev->dev); + + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto 
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, ð_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + 
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */
+
+#ifndef __SXE2_IRQ_H__
+#define __SXE2_IRQ_H__
+
+#include <ethdev_driver.h>
+
+#include "sxe2_type.h"
+#include "sxe2_drv_cmd.h"
+
+#define SXE2_IRQ_MAX_CNT 2048
+
+#define SXE2_LAN_MSIX_MIN_CNT 1
+
+#define SXE2_EVENT_IRQ_IDX 0
+
+#define SXE2_MAX_INTR_QUEUE_NUM 256
+
+#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16)
+
+#define SXE2_ITR_1000K 1
+#define SXE2_ITR_500K 2
+#define SXE2_ITR_50K 20
+
+#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K)
+#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K)
+
+struct sxe2_fwc_msix_caps;
+struct sxe2_adapter;
+
+struct sxe2_irq_context {
+	struct rte_intr_handle *reset_handle;
+	s32 reset_event_fd;
+	s32 other_event_fd;
+
+	u16 max_cnt_hw;
+	u16 base_idx_in_func;
+
+	u16 rxq_avail_cnt;
+	u16 rxq_base_idx_in_pf;
+
+	u16 rxq_irq_cnt;
+	u32 *rxq_msix_idx;
+	s32 *rxq_event_fd;
+};
+
+#endif
diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c
new file mode 100644
index 0000000000..98343679f6
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_queue.c
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include "sxe2_ethdev.h"
+#include "sxe2_queue.h"
+#include "sxe2_common_log.h"
+#include "sxe2_errno.h"
+
+void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+	struct sxe2_drv_queue_caps *q_caps)
+{
+	adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt;
+	adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf;
+}
+
+s32 sxe2_queues_init(struct rte_eth_dev *dev)
+{
+	s32 ret = SXE2_SUCCESS;
+	u16 buf_size;
+	u16 frame_size;
+	struct sxe2_rx_queue *rxq;
+	u16 nb_rxq;
+
+	frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD;
+	for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) {
+		rxq = dev->data->rx_queues[nb_rxq];
+		if (!rxq)
+			continue;
+
+		buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM;
+		rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT));
+		rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE);
+		if (frame_size > rxq->rx_buf_len)
+			dev->data->scattered_rx = 1;
+	}
+
+	return ret;
+}
diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h
new file mode 100644
index 0000000000..e4cbd55faf
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_queue.h
@@ -0,0 +1,227 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *txq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t)pkts; + RTE_ATOMIC(uint64_t)bytes; + RTE_ATOMIC(uint64_t)drop_pkts; + RTE_ATOMIC(uint64_t)drop_bytes; + RTE_ATOMIC(uint64_t)unicast_pkts; + RTE_ATOMIC(uint64_t)multicast_pkts; + RTE_ATOMIC(uint64_t)broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...)PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
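[Editor note] The Rx descriptor accessors in the header above all follow one `(qword & MASK) >> SHIFT` pattern. A standalone sketch of that pattern, restating two of the patch's field layouts locally (the names and values here are illustrative copies; the authoritative definitions are in the diff above):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative copies of two accessors from the patch: packet length
 * occupies qword1 bits 45:32 (14 bits), EOP is status bit 1. */
#define PKT_LEN_SHIFT 32
#define PKT_LEN_MASK  (0x3FFFULL << PKT_LEN_SHIFT)
#define EOP_MASK      (0x1ULL << 1)

/* Extract the 14-bit packet length field from descriptor qword1. */
static inline uint16_t rx_desc_pkt_len(uint64_t qw1)
{
	return (uint16_t)((qw1 & PKT_LEN_MASK) >> PKT_LEN_SHIFT);
}

/* Test the end-of-packet status bit. */
static inline int rx_desc_is_eop(uint64_t status_qw)
{
	return (status_qw & EOP_MASK) != 0;
}
```

Packing the length into the high word and status bits into the low word keeps every field extraction a single mask-and-shift, which is why the patch defines the `_MASK` constants pre-shifted.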
* [PATCH v7 05/10] drivers: support PCI BAR mapping 2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5 ` (3 preceding siblings ...) 2026-05-06 3:31 ` [PATCH v7 04/10] drivers: add base driver probe skeleton liujie5 @ 2026-05-06 3:31 ` liujie5 2026-05-06 3:31 ` [PATCH v7 06/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (3 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 3:31 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel has been reset, need to restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + cmd_fd,
bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx =
map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + 
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter,
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
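[Editor note] The alignment arithmetic in sxe2_dev_pci_seg_map() above is worth spelling out: mmap() offsets must be page aligned, so the driver floors the BAR offset to a page boundary, keeps the intra-page remainder aside, and rounds the mapping length up to a page multiple. A plain-C sketch of the same math (the struct and function names here are invented for illustration; the patch itself uses RTE_ALIGN_FLOOR/RTE_ALIGN and rte_mem_page_size()):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Page-aligned window around an arbitrary BAR register range.
 * page_size is assumed to be a power of two. */
struct seg_window {
	uint64_t aligned_offset;    /* page-aligned mmap offset */
	size_t   aligned_len;       /* page-multiple mmap length */
	size_t   page_inner_offset; /* register start within the mapping */
};

static struct seg_window seg_window_calc(uint64_t org_offset, size_t org_len,
					 size_t page_size)
{
	struct seg_window w;

	/* Floor the offset to a page boundary ... */
	w.aligned_offset = org_offset & ~((uint64_t)page_size - 1);
	/* ... remember how far into that page the register really starts ... */
	w.page_inner_offset = (size_t)(org_offset - w.aligned_offset);
	/* ... and round the covered span up to whole pages. */
	w.aligned_len = (w.page_inner_offset + org_len + page_size - 1) &
			~(page_size - 1);
	return w;
}
```

The register pointer the driver hands out is then `(u8 *)map_addr + page_inner_offset`, which is why the patch stores `page_inner_offset` in each segment alongside the mapped base address.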
* [PATCH v7 06/10] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5 ` (4 preceding siblings ...) 2026-05-06 3:31 ` [PATCH v7 05/10] drivers: support PCI BAR mapping liujie5 @ 2026-05-06 3:31 ` liujie5 2026-05-06 3:31 ` [PATCH v7 07/10] net/sxe2: support queue setup and control liujie5 ` (2 subsequent siblings) 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 3:31 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by the userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Fail to dma map, ret=%d", ret); + goto l_end; + } + 
+l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel has been reset, need to restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "IOMMU enabled, PA mode not supported"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "no IOMMU, VA mode not supported, please use PA mode."); + ret = SXE2_ERR_IO; + goto l_end; +
} + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h 
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v7 07/10] net/sxe2: support queue setup and control 2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5 ` (5 preceding siblings ...) 2026-05-06 3:31 ` [PATCH v7 06/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-06 3:31 ` liujie5 2026-05-06 3:31 ` [PATCH v7 08/10] drivers: add data path for Rx and Tx liujie5 2026-05-06 3:31 ` [PATCH v7 09/10] net/sxe2: add vectorized " liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 3:31 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 160a0de8ed..803e47c1aa 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -17,6 +17,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 
sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { 
#define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + 
rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if 
(dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth 
= ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + 
dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configured with Keep CRC.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc
*desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + 
PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u desc ring mbuf alloc failed", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Failed to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, +
rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Failed to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++
b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2 tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Failed to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq < nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Failed to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v7 08/10] drivers: add data path for Rx and Tx 2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5 ` (6 preceding siblings ...) 2026-05-06 3:31 ` [PATCH v7 07/10] net/sxe2: support queue setup and control liujie5 @ 2026-05-06 3:31 ` liujie5 2026-05-06 3:31 ` [PATCH v7 09/10] net/sxe2: add vectorized " liujie5 8 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 3:31 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for the sxe2 PMD. Add sxe2_rx_pkts_scattered and sxe2_tx_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 13 +- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 11 files changed, 1082 insertions(+), 133 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 537d4e9f6a..d2ed1460a3 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -28,7 +28,7 @@ static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list static TAILQ_HEAD(sxe2_common_devices, 
sxe2_common_device) sxe2_common_devices_list = TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); -static pthread_mutex_t sxe2_common_devices_list_lock; +static rte_spinlock_t sxe2_common_devices_list_lock; static struct rte_pci_id *sxe2_common_pci_id_table; @@ -223,9 +223,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( cdev->config.kernel_reset = false; rte_ticketlock_init(&cdev->config.lock); - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); l_end: return cdev; @@ -233,10 +233,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( static void sxe2_common_device_free(struct sxe2_common_device *cdev) { - - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); rte_free(cdev); } @@ -662,7 +661,7 @@ sxe2_common_init(void) if (sxe2_commoin_inited) goto l_end; - pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + rte_spinlock_init(&sxe2_common_devices_list_lock); #ifdef SXE2_DPDK_DEBUG sxe2_common_log_stream_init(); #endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) 
\ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) 
\ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ 
-178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto 
l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 803e47c1aa..728a88b6a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -19,6 +19,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < 
bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > 
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v7 09/10] net/sxe2: add vectorized Rx and Tx 2026-05-06 3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5 ` (7 preceding siblings ...) 2026-05-06 3:31 ` [PATCH v7 08/10] drivers: add data path for Rx and Tx liujie5 @ 2026-05-06 3:31 ` liujie5 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 8 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-06 3:31 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch implements the vectorized data path for the sxe2 PMD. It utilizes SIMD instructions (e.g., SSE) to process multiple packets simultaneously, significantly improving throughput for small packet processing. The implementation includes: * Vectorized Rx burst function for bulk descriptor processing. * Vectorized Tx burst function with optimized resource cleanup. * Capability flags update to reflect vectorized path support. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 9 + drivers/net/sxe2/sxe2_ethdev.c | 8 +- drivers/net/sxe2/sxe2_txrx.c | 227 +++++++--- drivers/net/sxe2/sxe2_txrx.h | 12 +- drivers/net/sxe2/sxe2_txrx_poll.c | 184 ++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 3 +- drivers/net/sxe2/sxe2_txrx_vec.c | 188 ++++++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 549 ++++++++++++++++++++++++ 10 files changed, 1420 insertions(+), 67 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 728a88b6a1..b9618f2964 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -12,6 +12,14 @@ cflags += ['-g'] deps += ['common_sxe2', 
'hash','cryptodev','security'] +if arch_subdir == 'x86' + sources += files('sxe2_txrx_vec_sse.c') + + if is_windows and cc.get_id() != 'clang' + cflags += ['-fno-asynchronous-unwind-tables'] + endif +endif + sources += files( 'sxe2_ethdev.c', 'sxe2_cmd_chnl.c', @@ -21,6 +29,7 @@ sources += files( 'sxe2_rx.c', 'sxe2_txrx_poll.c', 'sxe2_txrx.c', + 'sxe2_txrx_vec.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 68d7e36cf1..7eaa1722d0 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -58,17 +58,11 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { }; static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { - /* SXE2_PCI_MAP_RES_INVALID */ {0, 0, 0}, - /* SXE2_PCI_MAP_RES_DOORBELL_TX */ { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ { SXE2_RXQ_TAIL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_DYN */ { SXE2_VF_DYN_CTL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_ITR(默认使用ITR0) */ { SXE2_VF_INT_ITR(0, 0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_MSIX */ { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, }; @@ -312,6 +306,8 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .rxq_info_get = sxe2_rx_queue_info_get, .txq_info_get = sxe2_tx_queue_info_get, + .rx_burst_mode_get = sxe2_rx_burst_mode_get, + .tx_burst_mode_get = sxe2_tx_burst_mode_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c index 3e88ab5241..8793a61d13 100644 --- a/drivers/net/sxe2/sxe2_txrx.c +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -9,12 +9,11 @@ #include <rte_memzone.h> #include <ethdev_driver.h> #include <unistd.h> - #include "sxe2_txrx.h" #include "sxe2_txrx_common.h" +#include "sxe2_txrx_vec.h" #include "sxe2_txrx_poll.h" #include "sxe2_ethdev.h" - #include "sxe2_common_log.h" #include "sxe2_errno.h" #include "sxe2_osal.h" @@ -22,18 +21,38 @@ #if 
defined(RTE_ARCH_ARM64) #include <rte_cpuflags.h> #endif - +s32 __rte_cold +sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) || + txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + } + *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH; +l_end: + return ret; +} static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) { struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; s32 ret; u16 desc_idx; - if (unlikely(offset >= txq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - desc_idx = txq->next_use + offset; desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); if (desc_idx >= txq->ring_depth) { @@ -41,19 +60,16 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) if (desc_idx >= txq->ring_depth) desc_idx -= txq->ring_depth; } - if (desc_idx == 0) desc_idx = txq->rs_thresh - 1; else desc_idx -= 1; - if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == (txq->desc_ring[desc_idx].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) ret = RTE_ETH_TX_DESC_DONE; else ret = RTE_ETH_TX_DESC_FULL; - l_end: return ret; } @@ -61,13 +77,11 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) { struct rte_mbuf *m_seg = mbuf; - while (m_seg != NULL) { if (m_seg->data_len == 0) return SXE2_ERR_INVAL; m_seg = m_seg->next; } - return SXE2_SUCCESS; } @@ -79,7 +93,6 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, u64 ol_flags = 0; s32 ret = SXE2_SUCCESS; s32 i = 0; - for (i = 0; i < nb_pkts; i++) { mbuf = tx_pkts[i]; if (!mbuf) @@ -98,12 +111,10 @@ u16 
sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -SXE2_ERR_INVAL; goto l_end; } - if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { rte_errno = -SXE2_ERR_INVAL; goto l_end; } - #ifdef RTE_ETHDEV_DEBUG_TX ret = rte_validate_tx_offload(mbuf); if (ret != SXE2_SUCCESS) { @@ -116,14 +127,12 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -ret; goto l_end; } - ret = sxe2_tx_mbuf_empty_check(mbuf); if (ret != SXE2_SUCCESS) { rte_errno = -ret; goto l_end; } } - l_end: return i; } @@ -132,42 +141,119 @@ void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 tx_mode_flags = 0; - + s32 ret; + u32 vec_flags; + u32 batch_flags; + RTE_SET_USED(vec_flags); PMD_INIT_FUNC_TRACE(); - - dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; - dev->tx_pkt_burst = sxe2_tx_pkts; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_tx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) { +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) { +#ifdef CC_AVX512_SUPPORT + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX512); +#else + PMD_LOG_INFO(TX, "AVX512 is not supported in build env."); +#endif + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK)) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX2); + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK))) + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_SSE); +#endif + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + ret = sxe2_tx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK); + 
} + } + ret = sxe2_tx_simple_batch_support_check(dev, &batch_flags); + if (ret == SXE2_SUCCESS && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH) + tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH; + } + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + dev->tx_pkt_prepare = NULL; +#ifdef RTE_ARCH_X86 + if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse; + } else { + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple; + } +#endif + } else { + if (tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) { + dev->tx_pkt_prepare = NULL; + dev->tx_pkt_burst = sxe2_tx_pkts_simple; + } else { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + } + } adapter->q_ctxt.tx_mode_flags = tx_mode_flags; PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", tx_mode_flags, dev->data->port_id); } +static const struct { + eth_tx_burst_t tx_burst; + const char *info; +} sxe2_tx_burst_infos[] = { + { sxe2_tx_pkts, "Scalar" }, +#ifdef RTE_ARCH_X86 + { sxe2_tx_pkts_vec_sse, "Vector SSE" }, + { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" }, +#endif +}; + +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode) +{ + eth_tx_burst_t pkt_burst = dev->tx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i; + u32 size; + size = RTE_DIM(sxe2_tx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_tx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) { struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; s32 ret; - if (unlikely(offset >= rxq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - if (offset >= rxq->ring_depth - rxq->hold_num) { ret = RTE_ETH_RX_DESC_UNAVAIL; goto l_end; } - if 
(rxq->processing_idx + offset >= rxq->ring_depth) desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; else desc = &rxq->desc_ring[rxq->processing_idx + offset]; - if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) ret = RTE_ETH_RX_DESC_DONE; else ret = RTE_ETH_RX_DESC_AVAIL; - l_end: PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", offset, ret, rxq->queue_id, rxq->port_id); @@ -179,7 +265,6 @@ static s32 sxe2_rx_queue_count(void *rx_queue) struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; u16 done_num = 0; - desc = &rxq->desc_ring[rxq->processing_idx]; while ((done_num < rxq->ring_depth) && (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & @@ -190,59 +275,93 @@ static s32 sxe2_rx_queue_count(void *rx_queue) else desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; } - PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", done_num, rxq->queue_id, rxq->port_id); - return done_num; } -static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) -{ - struct sxe2_rx_queue *rxq; - bool en = false; - u16 i; - - for (i = 0; i < dev->data->nb_rx_queues; ++i) { - rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; - if (rxq == NULL) - continue; - - if (0 != (rxq->offloads & offload)) { - en = true; - goto l_end; - } - } - -l_end: - return en; -} - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) { - struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); +struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 rx_mode_flags = 0; - +#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64) + s32 ret; + u32 vec_flags; +#endif PMD_INIT_FUNC_TRACE(); - + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_rx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + 
((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_AVX2); + } + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_SSE); + } + if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) { + ret = sxe2_rx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK); + } + } + } +#ifdef RTE_ARCH_X86 + if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) { + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload; + goto l_end; + } +#endif if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; else dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - + goto l_end; +l_end: PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", rx_mode_flags, dev->data->port_id); adapter->q_ctxt.rx_mode_flags = rx_mode_flags; } +static const struct { + eth_rx_burst_t rx_burst; + const char *info; +} sxe2_rx_burst_infos[] = { + { sxe2_rx_pkts_scattered, "Scalar Scattered" }, + { sxe2_rx_pkts_scattered_split, "Scalar Scattered split" }, +#ifdef RTE_ARCH_X86 + { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" }, +#endif +}; + +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode) +{ + eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i, size; + size = RTE_DIM(sxe2_rx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_rx_burst_infos[i].rx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_rx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + void sxe2_set_common_function(struct rte_eth_dev *dev) { PMD_INIT_FUNC_TRACE(); - dev->rx_queue_count = 
sxe2_rx_queue_count; dev->rx_descriptor_status = sxe2_rx_desciptor_status; dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - dev->tx_descriptor_status = sxe2_tx_desciptor_status; dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; dev->tx_pkt_burst = sxe2_tx_pkts; diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h index cd9ebfa32f..7bb852789c 100644 --- a/drivers/net/sxe2/sxe2_txrx.h +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -6,16 +6,16 @@ #define SXE2_TXRX_H #include <ethdev_driver.h> #include "sxe2_queue.h" - void sxe2_set_common_function(struct rte_eth_dev *dev); - +s32 __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags); u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); - void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); - +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode); +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode); #endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c index 55bea8b74c..41f7288318 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.c +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -19,6 +19,66 @@ #include "sxe2_common_log.h" #include "sxe2_errno.h" +static __rte_always_inline s32 +sxe2_tx_bufs_free(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1]; + if 
(txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) { + mbuf = buffer[0].mbuf; + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = buffer[i].mbuf; + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + rte_mempool_put(buffer->mbuf->pool, buffer->mbuf); + buffer->mbuf = NULL; + } + } + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + buffer->mbuf = NULL; + } + } + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) { s32 ret = SXE2_SUCCESS; @@ -330,6 +390,130 @@ u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) return tx_num; } +static __rte_always_inline void +sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 desc_offset; + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, (*tx_pkts)->data_len, 0); +} +static __rte_always_inline void +sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 i; + u32 desc_offset; + for (i = 0; i < 
SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) { + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, + (*tx_pkts)->data_len, + 0); + } +} + +static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use]; + volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use]; + u32 i, j; + u32 mainpart; + u32 leftover; + mainpart = nb_pkts & ((u32)~SXE2_TX_FILL_PER_LOOP_MASK); + leftover = nb_pkts & ((u32)SXE2_TX_FILL_PER_LOOP_MASK); + for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) { + for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j) + (buffer + i + j)->mbuf = *(tx_pkts + i + j); + sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i); + } + if (unlikely(leftover > 0)) { + for (i = 0; i < leftover; ++i) { + (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i); + sxe2_tx_data_desc_fill(desc + mainpart + i, + tx_pkts + mainpart + i); + } + } +} + +static inline u16 sxe2_tx_pkts_batch(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + u16 res_num = 0; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx batch: may not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + txq->desc_free_num -= nb_pkts; + if ((txq->next_use + nb_pkts) > txq->ring_depth) { + res_num = txq->ring_depth - txq->next_use; + sxe2_tx_ring_fill(txq, tx_pkts, res_num); + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + 
rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs = txq->rs_thresh - 1; + txq->next_use = 0; + } + sxe2_tx_ring_fill(txq, tx_pkts + res_num, nb_pkts - res_num); + txq->next_use = txq->next_use + (nb_pkts - res_num); + if (txq->next_use > txq->next_rs) { + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + if (txq->next_rs >= txq->ring_depth) + txq->next_rs = txq->rs_thresh - 1; + } + if (txq->next_use >= txq->ring_depth) + txq->next_use = 0; + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, txq->next_use, nb_pkts); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use); + SXE2_TX_STATS_CNT(tx_queue, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 tx_done_num; + u16 tx_once_num; + u16 tx_need_num; + if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) { + tx_done_num = sxe2_tx_pkts_batch(tx_queue, + tx_pkts, nb_pkts); + goto l_end; + } + tx_done_num = 0; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM); + tx_once_num = sxe2_tx_pkts_batch(tx_queue, + &tx_pkts[tx_done_num], tx_need_num); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } +l_end: + return tx_done_num; +} + static inline void sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) { diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h index 4924b0f41f..67da08e58e 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.h +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -8,7 +8,8 @@ #include "sxe2_queue.h" u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 
nb_pkts); u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); diff --git a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c new file mode 100644 index 0000000000..1e44d510cd --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.c @@ -0,0 +1,188 @@ +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_rx_queue *rxq; + s32 ret = SXE2_SUCCESS; + u16 i; + *vec_flags = SXE2_RX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (!rte_is_power_of_2(rxq->ring_depth)) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC && + (rxq->ring_depth % rxq->rx_free_thresh) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + if ((rxq->offloads & offload) != 0) { + en = true; + goto l_end; + } + } +l_end: + return en; +} + +static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq) +{ + const u16 mask = rxq->ring_depth - 1; + u16 i; + if (unlikely(!rxq->buffer_ring)) { + PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring is NULL." 
+ "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id); + return; + } + if (rxq->realloc_num >= rxq->ring_depth) + return; + if (rxq->realloc_num == 0) { + for (i = 0; i < rxq->ring_depth; ++i) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } else { + for (i = rxq->processing_idx; + i != rxq->realloc_start; + i = (i + 1) & mask) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + rxq->realloc_num = rxq->ring_depth; + memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0])); +} + +static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq) +{ + uintptr_t data; + struct rte_mbuf mbuf_def; + mbuf_def.buf_addr = 0; + mbuf_def.nb_segs = 1; + mbuf_def.data_off = RTE_PKTMBUF_HEADROOM; + mbuf_def.port = rxq->port_id; + rte_mbuf_refcnt_set(&mbuf_def, 1); + rte_compiler_barrier(); + data = (uintptr_t)&mbuf_def.rearm_data; + rxq->mbuf_init_value = *(u64 *)data; +} + +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_rx_queue *rxq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i); + continue; + } + rxq->ops.mbufs_release = sxe2_rx_queue_mbufs_release_vec; + sxe2_rx_queue_vec_init(rxq); + } + return ret; +} + +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u32 i; + *vec_flags = SXE2_TX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC || + txq->rs_thresh > SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) { + ret = SXE2_ERR_NOTSUP; + 
goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + u16 i; + if (unlikely(txq == NULL || txq->buffer_ring == NULL)) { + PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params."); + goto l_end; + } + i = txq->next_dd - (txq->rs_thresh - 1); + buffer = txq->buffer_ring; + if (txq->next_use < i) { + for ( ; i < txq->ring_depth; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + i = 0; + } + for (; i < txq->next_use; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } +l_end: + return; +} + +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_tx_queue *txq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) { + PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i); + continue; + } + txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec; + } + return ret; +} diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h new file mode 100644 index 0000000000..cb6a3dd3b8 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_VEC_H_ +#define _SXE2_TXRX_VEC_H_ +#include <ethdev_driver.h> +#include "sxe2_queue.h" +#include "sxe2_type.h" +#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10) +#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \ + SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \ + SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \ + SXE2_RX_MODE_VEC_NEON) +#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10) +#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \ + SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \ + SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \ + SXE2_TX_MODE_VEC_NEON) +#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \ + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \ + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_TSO | \ + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_SECURITY | \ + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) +#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_CKSUM) +#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP | \ + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \ + RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_SECURITY | \ + 
RTE_ETH_RX_OFFLOAD_QINQ_STRIP) +#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH) +#ifdef RTE_ARCH_X86 +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts); +#endif +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload); +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev); +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h new file mode 100644 index 0000000000..c0405c9a59 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h @@ -0,0 +1,235 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_TXRX_VEC_COMMON_H__ +#define __SXE2_TXRX_VEC_COMMON_H__ +#include <rte_atomic.h> +#ifdef PCLINT +#include "avx_stub.h" +#endif +#include "sxe2_rx.h" +#include "sxe2_queue.h" +#include "sxe2_tx.h" +#include "sxe2_vsi.h" +#include "sxe2_ethdev.h" +#define SXE2_RX_NUM_PER_LOOP_SSE 4 +#define SXE2_RX_NUM_PER_LOOP_AVX 8 +#define SXE2_RX_NUM_PER_LOOP_NEON 4 +#define SXE2_RX_REARM_THRESH_VEC 64 +#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32 +#define SXE2_TX_RS_THRESH_MIN_VEC 32 +#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64 + +static __rte_always_inline void +sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 i; + for (i = 0; i < nb_pkts; ++i) + buffer[i].mbuf = tx_pkts[i]; +} + +static __rte_always_inline s32 +sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)]; + mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf); + if (likely(mbuf)) { + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (likely(mbuf)) { + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + } + 
} + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + +static inline void +sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, u64 *desc_qw1) +{ + u64 offloads = mbuf->ol_flags; + u32 desc_cmd = 0; + u32 desc_offset = 0; + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + default: + break; + } + *desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + *desc_qw1 |= ((u64)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT; + } + *desc_qw1 |= ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT; +} +#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \ + (((_flags) & 0x30) >> 4) + +static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq, + struct rte_mbuf *mbuf, u8 umbcast_flag) +{ + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed); + switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } + } +} + +static inline u16 +sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq, + struct rte_mbuf **mbuf_bufs, u16 mbuf_num, + u8 *split_rxe_flags, u8 *umbcast_flags) +{ + struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *tmp_seg; + u16 done_num, buf_idx; + done_num = 0; + for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) { + if (last_seg) { + last_seg->next = mbuf_bufs[buf_idx]; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + first_seg->nb_segs++; + first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len; + last_seg = last_seg->next; + if (split_rxe_flags[buf_idx] == 0) { + first_seg->hash = last_seg->hash; + first_seg->vlan_tci = last_seg->vlan_tci; + first_seg->ol_flags = last_seg->ol_flags; + first_seg->pkt_len -= rxq->crc_len; + if (last_seg->data_len > rxq->crc_len) { + last_seg->data_len -= rxq->crc_len; + } else { + tmp_seg = first_seg; + first_seg->nb_segs--; + while (tmp_seg->next != last_seg) + tmp_seg = tmp_seg->next; + tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len); + tmp_seg->next = NULL; + rte_pktmbuf_free_seg(last_seg); + last_seg = NULL; + } + done_pkts[done_num++] = first_seg; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]); + first_seg = NULL; + last_seg = NULL; + } else if 
(split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + continue; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + last_seg = NULL; + continue; + } + } else { + if (split_rxe_flags[buf_idx] == 0) { + done_pkts[done_num++] = mbuf_bufs[buf_idx]; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx], + umbcast_flags[buf_idx]); + continue; + } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + first_seg = mbuf_bufs[buf_idx]; + last_seg = first_seg; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]); + continue; + } + } + } + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *))); + return done_num; +} +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c new file mode 100644 index 0000000000..1f5effd203 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c @@ -0,0 +1,549 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_bitops.h> +#include <rte_malloc.h> +#include <rte_mempool.h> +#include <rte_vect.h> +#include "rte_common.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_queue.h" +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_vsi.h" + +static __rte_always_inline void +sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf *pkt, + u64 desc_cmd, bool with_offloads) +{ + __m128i data_desc; + u64 desc_qw1; + u32 desc_offset; + desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA | + ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT | + ((u64)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len); + desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (with_offloads) + sxe2_tx_desc_fill_offloads(pkt, &desc_qw1); + data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc); +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + volatile union sxe2_tx_data_desc *desc; + struct sxe2_tx_buffer *buffer; + u16 next_use; + u16 res_num; + u16 tx_num; + u16 i; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free_vec(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx pkts sse batch: not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + tx_num = nb_pkts; + next_use = txq->next_use; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + txq->desc_free_num -= nb_pkts; + res_num = txq->ring_depth - txq->next_use; + if (tx_num >= res_num) { + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num); + for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, 
+ SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++, + (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS), + with_offloads); + tx_num -= res_num; + next_use = 0; + txq->next_rs = txq->rs_thresh - 1; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + } + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num); + for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + next_use += tx_num; + if (next_use > txq->next_rs) { + txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + } + txq->next_use = next_use; + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, nb_pkts); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + u16 tx_done_num = 0; + u16 tx_once_num; + u16 tx_need_num; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh); + tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq, + tx_pkts + tx_done_num, + tx_need_num, with_offloads); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } + return tx_done_num; +} + +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, false); +} +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, true); +} + +static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq) +{ + volatile 
union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + struct rte_mbuf *mbuf0, *mbuf1; + __m128i dma_addr0, dma_addr1; + __m128i virt_addr0, virt_addr1; + __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, + RTE_PKTMBUF_HEADROOM); + s32 ret; + u16 i; + u16 new_tail; + buffer = &rxq->buffer_ring[rxq->realloc_start]; + desc = &rxq->desc_ring[rxq->realloc_start]; + ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer, + SXE2_RX_REARM_THRESH_VEC); + if (ret != 0) { + PMD_LOG_RX_INFO("Rx mbuf vec alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, rxq->queue_id); + if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) { + dma_addr0 = _mm_setzero_si128(); + for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) { + buffer[i] = &rxq->fake_mbuf; + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read), + dma_addr0); + } + } + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed += + SXE2_RX_REARM_THRESH_VEC; + goto l_end; + } + for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) { + mbuf0 = buffer[0]; + mbuf1 = buffer[1]; +#if RTE_IOVA_IN_MBUF + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) != + offsetof(struct rte_mbuf, buf_addr) + 8); +#endif + virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr); + virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr); +#if RTE_IOVA_IN_MBUF + dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1); +#else + dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1); +#endif + dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room); + dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), + dma_addr0); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), + dma_addr1); + } + rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC; + if (rxq->realloc_start >= rxq->ring_depth) + rxq->realloc_start = 0; + rxq->realloc_num -= 
SXE2_RX_REARM_THRESH_VEC; + new_tail = (rxq->realloc_start == 0) ? + (rxq->ring_depth - 1) : (rxq->realloc_start - 1); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail); +l_end: + return; +} + +static __rte_always_inline __m128i +sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4]) +{ + __m128i descs_tmp1, descs_tmp2; + __m128i descs_fnav_vld; + __m128i v_zeros, v_ffff, v_u32_one; + __m128i m_flags; + const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID); + descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]); + descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]); + descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2); + descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26); + descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31); + v_zeros = _mm_setzero_si128(); + v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros); + v_u32_one = _mm_srli_epi32(v_ffff, 31); + m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one); + m_flags = _mm_and_si128(m_flags, fdir_flags); + return m_flags; +} + +static __rte_always_inline void +sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq, + volatile union sxe2_rx_desc *desc __rte_unused, + __m128i descs_arr[4], + struct rte_mbuf **rx_pkts) +{ + const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value); + __m128i rearm_arr[4]; + __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags; + const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04, + 0x00001C04, 0x00001C04); + const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000, + 0x20000000, 0x20000000); + const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, + 0, 0, 0, RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + 0, 0, 0, 0); + const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH, + 0, 0, 0, 0); + const __m128i cksum_flags = + _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + 
RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1)); + const __m128i cksum_mask = + _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD); + const __m128i vlan_mask = + _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED); + flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]); + tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]); + tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags); + tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags); + tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask); + tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask); + tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo); + flags = _mm_and_si128(tmp_flags, vlan_mask); + tmp_desc_lo = 
_mm_srli_epi32(tmp_desc_lo, 10); + tmp_flags = _mm_shuffle_epi8(cksum_flags, tmp_desc_lo); + tmp_flags = _mm_slli_epi32(tmp_flags, 1); + tmp_flags = _mm_and_si128(tmp_flags, cksum_mask); + flags = _mm_or_si128(flags, tmp_flags); + tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27); + tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi); + flags = _mm_or_si128(flags, tmp_flags); +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + if (rxq->fnav_enable) { + __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr); + flags = _mm_or_si128(flags, tmp_fnav_flags); + rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id; + rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id; + rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id; + rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id; + } +#endif + rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30); + rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30); + rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30); + rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) != + offsetof(struct rte_mbuf, rearm_data) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) != + RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]); +} + +static inline u16 +sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts, u8 *split_rxe_flags, + u8 *umbcast_flags) +{ + volatile union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i staterr, sterr_tmp1, sterr_tmp2; + 
__m128i pmbuf0; + __m128i ptype_all; +#ifdef RTE_ARCH_X86_64 + __m128i pmbuf1; +#endif + u32 i; + u32 bit_num; + u16 done_num = 0; + const u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + const __m128i crc_adjust = + _mm_set_epi16(0, 0, 0, + -rxq->crc_len, + 0, -rxq->crc_len, + 0, 0); + const __m128i rvp_shuf_mask = + _mm_set_epi8(7, 6, 5, 4, + 3, 2, + 13, 12, + 0XFF, 0xFF, 13, 12, + 0xFF, 0xFF, 0xFF, 0xFF); + const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL, + 0x0000000100000001LL); + const __m128i eop_mask = _mm_slli_epi32(dd_mask, + SXE2_RX_DESC_STATUS_EOP_SHIFT); + const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL, + 0x0000208000002080LL); + const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x04, 0x0C, + 0x00, 0x08); + const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12); + desc = &rxq->desc_ring[rxq->processing_idx]; + rte_prefetch0(desc); + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE); + if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC) + sxe2_rx_queue_rearm_sse(rxq); + if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK) == 0) + goto l_end; + buffer = &rxq->buffer_ring[rxq->processing_idx]; + for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE, + desc += SXE2_RX_NUM_PER_LOOP_SSE) { + pmbuf0 = 
_mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i])); + descs_arr[3] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &desc + 3)); + rte_compiler_barrier(); + _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0); +#ifdef RTE_ARCH_X86_64 + pmbuf1 = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i + 2])); +#endif + descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &desc + 2)); + rte_compiler_barrier(); + descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &desc + 1)); + rte_compiler_barrier(); + descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &desc)); +#ifdef RTE_ARCH_X86_64 + _mm_storeu_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[i + 2]), pmbuf1); +#endif + if (split_rxe_flags) { + rte_mbuf_prefetch_part2(rx_pkts[i]); + rte_mbuf_prefetch_part2(rx_pkts[i + 1]); + rte_mbuf_prefetch_part2(rx_pkts[i + 2]); + rte_mbuf_prefetch_part2(rx_pkts[i + 3]); + } + rte_compiler_barrier(); + mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask); + mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask); + mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask); + mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask); + sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]); + sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]); + sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts); + mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust); + mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust); + mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust); + mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust); + staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2); + ptype_all = _mm_and_si128(staterr, ptype_mask); + _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1, + mbuf_arr[3]); + _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1, + mbuf_arr[2]); + if (umbcast_flags != NULL) { + const __m128i umbcast_mask = + _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + 
SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK); + const __m128i umbcast_shuf_mask = + _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x07, 0x0F, + 0x03, 0x0B); + __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask); + umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask); + *(s32 *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits); + umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + if (split_rxe_flags != NULL) { + __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask); + __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask); + rxe_bits = _mm_srli_epi32(rxe_bits, 7); + eop_bits = _mm_or_si128(eop_bits, rxe_bits); + eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask); + *(s32 *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits); + split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + staterr = _mm_and_si128(staterr, dd_mask); + staterr = _mm_packs_epi32(staterr, _mm_setzero_si128()); + _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1, + mbuf_arr[1]); + _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1, + mbuf_arr[0]); + rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)]; + rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)]; + rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)]; + rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)]; + bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr)); + done_num += bit_num; + if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE)) + break; + } + rxq->processing_idx += done_num; + rxq->processing_idx &= (rxq->ring_depth - 1); + rxq->realloc_num += done_num; + PMD_LOG_RX_DEBUG("port_id=%u queue_id=%u last_id=%u recv_pkts=%d", + rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num); +l_end: + return done_num; +} +static __rte_always_inline u16 +sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ 
+ const u64 *split_rxe_flags64; + u8 split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u8 umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u16 rx_done_num; + u16 rx_pkt_done_num; + rx_pkt_done_num = 0; + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, umbcast_flags); + } else { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, NULL); + } + if (rx_done_num == 0) + goto l_end; + if (!rxq->vsi->adapter->devargs.sw_stats_en) { + split_rxe_flags64 = (u64 *)split_rxe_flags; + if (rxq->pkt_first_seg == NULL && + split_rxe_flags64[0] == 0 && + split_rxe_flags64[1] == 0 && + split_rxe_flags64[2] == 0 && + split_rxe_flags64[3] == 0) { + rx_pkt_done_num = rx_done_num; + goto l_end; + } + if (rxq->pkt_first_seg == NULL) { + while (rx_pkt_done_num < rx_done_num && + split_rxe_flags[rx_pkt_done_num] == 0) + rx_pkt_done_num++; + if (rx_pkt_done_num == rx_done_num) + goto l_end; + rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num]; + } + } + rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num], + rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num], + &umbcast_flags[rx_pkt_done_num]); +l_end: + return rx_pkt_done_num; +} + +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + u16 done_num = 0; + u16 once_num; + while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) { + once_num = + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, + SXE2_RX_PKTS_BURST_BATCH_NUM_VEC); + done_num += once_num; + nb_pkts -= once_num; + if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) + goto l_end; + } + done_num += + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, nb_pkts); +l_end: + SXE2_RX_STATS_CNT(rx_queue, rx_pkts_num, done_num); + return done_num; +} -- 2.47.3 ^ permalink raw reply related 
* [PATCH v8 00/10] Add Linkdata sxe2 driver 2026-05-06 3:31 ` [PATCH v7 09/10] net/sxe2: add vectorized " liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 6:12 ` [PATCH v8 01/10] mailmap: add Jie Liu liujie5 ` (9 more replies) 0 siblings, 10 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> V10: - Addressed AI comments Jie Liu (10): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control drivers: add data path for Rx and Tx net/sxe2: add vectorized Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 4 + drivers/common/sxe2/meson.build | 15 + drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 35 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 971 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 316 +++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ 
drivers/net/sxe2/sxe2_queue.c | 39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 369 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 966 ++++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 17 + drivers/net/sxe2/sxe2_txrx_vec.c | 188 ++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 547 ++++++++++++ drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 44 files changed, 10041 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c 
create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
* [PATCH v8 01/10] mailmap: add Jie Liu 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 6:12 ` [PATCH v8 02/10] doc: add sxe2 guide and release notes liujie5 ` (8 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 895412e568..d2c4485636 100644 --- a/.mailmap +++ b/.mailmap @@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v8 02/10] doc: add sxe2 guide and release notes 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 2026-05-06 6:12 ` [PATCH v8 01/10] mailmap: add Jie Liu liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 6:12 ` [PATCH v8 03/10] drivers: add sxe2 basic structures liujie5 ` (7 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 4 ++++ 4 files changed, 39 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates only be supported when non-vector path +; is selected. 
+; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps Network Adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported + +Implementation details +---------------------- + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resources allocations are handled by the kernel +combined with hardware specifications that allow it to handle virtual memory +addresses directly ensure that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index f012d47a4b..fa0f0f5cca 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -64,6 +64,10 @@ New Features * ``--auto-probing`` enables the initial bus probing, which is the current default behavior. +* **Added Linkdata sxe2 ethernet driver.** + + Added network driver for the Linkdata Network Adapters. 
+ Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v8 03/10] drivers: add sxe2 basic structures 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 2026-05-06 6:12 ` [PATCH v8 01/10] mailmap: add Jie Liu liujie5 2026-05-06 6:12 ` [PATCH v8 02/10] doc: add sxe2 guide and release notes liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 6:12 ` [PATCH v8 04/10] common/sxe2: add base driver skeleton liujie5 ` (6 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 13 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1959 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..7d448629d5 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2023 Corigine, Inc. + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void +sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, 
Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) 
\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) \ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) 
\ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = -ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, 
+ + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMIEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT 
BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 +#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) 
+#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + 
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define 
SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define 
SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define 
SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 
0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + CGMAC_PORT_OFFSET * (_port) + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + (port) * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
 */ + +#ifndef __SXE2_INTERNAL_VER_H__ +#define __SXE2_INTERNAL_VER_H__ + +#define SXE2_VER_MAJOR_OFFSET (16) +#define SXE2_MK_VER(major, minor) \ + (((major) << SXE2_VER_MAJOR_OFFSET) | (minor)) +#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff) +#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff) + +#define SXE2_ITR_VER_MAJOR_V100 1 +#define SXE2_ITR_VER_MAJOR_V200 2 + +#define SXE2_ITR_VER_MAJOR 1 +#define SXE2_ITR_VER_MINOR 1 +#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR) + +#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100) +#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200) + +#define SXE2LIB_ITR_VER_MAJOR 1 +#define SXE2LIB_ITR_VER_MINOR 1 +#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR) + +#define SXE2_DRV_CLI_VER_MAJOR 1 +#define SXE2_DRV_CLI_VER_MINOR 1 +#define SXE2_DRV_CLI_VER \ + SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR) + +#endif diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h new file mode 100644 index 0000000000..fd6823fe98 --- /dev/null +++ b/drivers/common/sxe2/sxe2_osal.h @@ -0,0 +1,584 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_OSAL_H__ +#define __SXE2_OSAL_H__ +#include <string.h> +#include <stdint.h> +#include <stdarg.h> +#include <inttypes.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_ether.h> +#include <rte_version.h> + +#include "sxe2_type.h" + +#define BIT(nr) (1UL << (nr)) +#ifndef __BITS_PER_LONG +#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG) +#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG)) + +#ifndef BIT_ULL +#define BIT_ULL(a) (1ULL << (a)) +#endif + +#define MIN(a, b) ((a) < (b) ? 
(a) : (b)) + +#define BITS_PER_BYTE 8 + +#define IS_UNICAST_ETHER_ADDR(addr) \ + ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0)) + +#define STRUCT_SIZE(ptr, field, num) \ + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) + +#ifndef TAILQ_FOREACH_SAFE +#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \ + for ((var) = TAILQ_FIRST((head)); \ + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \ + (var) = (tvar)) +#endif + +#define SXE2_QUEUE_WAIT_RETRY_CNT (50) + +#define __iomem + +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define dma_addr_t rte_iova_t + +#define resource_size_t u64 + +#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f) +#define ARRAY_SIZE(arr) RTE_DIM(arr) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +#define CPU_TO_BE16(o) rte_cpu_to_be_16(o) +#define CPU_TO_BE32(o) rte_cpu_to_be_32(o) +#define CPU_TO_BE64(o) rte_cpu_to_be_64(o) +#define BE16_TO_CPU(o) rte_be_to_cpu_16(o) + +#define NTOHS(a) rte_be_to_cpu_16(a) +#define NTOHL(a) rte_be_to_cpu_32(a) +#define HTONS(a) rte_cpu_to_be_16(a) +#define HTONL(a) rte_cpu_to_be_32(a) + +#define udelay(x) rte_delay_us(x) + +#define mdelay(x) rte_delay_us(1000 * (x)) + +#define msleep(x) rte_delay_us(1000 * (x)) + +#ifndef DIV_ROUND_UP +#define DIV_ROUND_UP(n, d) \ + (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) +#endif + +#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) + +#define __bf_shf(x) ((uint32_t)rte_bsf64(x)) + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG 32 +#endif + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask))) + +#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d) 
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef char s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
'common/zsda', # depends on bus. + 'common/sxe2', # depends on bus. 'mempool', # depends on common and bus. 'dma', # depends on common and bus. 'net', # depends on common, bus, mempool -- 2.47.3
* [PATCH v8 04/10] common/sxe2: add base driver skeleton From: liujie5 @ 2026-05-06 6:12 UTC To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 2 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ 6 files changed, 1071 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build index 7d448629d5..3626fb1119 100644 --- a/drivers/common/sxe2/meson.build +++ b/drivers/common/sxe2/meson.build @@ -9,5 +9,7 @@ cflags += [ deps += ['bus_pci', 'net', 'eal', 'ethdev'] sources = files( + 'sxe2_common.c', 'sxe2_common_log.c', + 'sxe2_ioctl_chnl.c', ) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c new
file mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; 
+ + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void *args) +{ + u32 *class_type = (u32 *)args; + 
s32 ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + 
(void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + } + + cdev->cdrv = cdrv; +l_end: + return ret; +} + 
+static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto l_free_args; + } + + ret = 
sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool exists = false; + + for (i = 
0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + sxe2_common_pci_driver.drv_flags |= 
RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_common_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_common_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); +#ifdef SXE2_DPDK_DEBUG + sxe2_common_log_stream_init(); +#endif + sxe2_common_pci_init(); + sxe2_common_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..f62e00e053 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = SXE2_ERR_IO; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset; restart the app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]" + " opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct 
rte_pci_device *pci_dev) +{ + s32 ret = SXE2_SUCCESS; + s32 fd = 0; + s8 drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd > 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset; restart the app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + 
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v8 05/10] drivers: add base driver probe skeleton 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 ` (3 preceding siblings ...) 2026-05-06 6:12 ` [PATCH v8 04/10] common/sxe2: add base driver skeleton liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 6:12 ` [PATCH v8 06/10] drivers: support PCI BAR mapping liujie5 ` (4 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 22 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3025 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 
100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset; restart the app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64, + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 
'softnic', + 'sxe2', + 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..160a0de8ed --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,22 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Process the base subdirectory and fetch the target objects + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, 
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return 
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "enable queues failed"); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + goto l_end; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + 
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret 
= SXE2_SUCCESS; + + if (!cdev) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto 
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + 
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *txq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t)pkts; + RTE_ATOMIC(uint64_t)bytes; + RTE_ATOMIC(uint64_t)drop_pkts; + RTE_ATOMIC(uint64_t)drop_bytes; + RTE_ATOMIC(uint64_t)unicast_pkts; + RTE_ATOMIC(uint64_t)multicast_pkts; + RTE_ATOMIC(uint64_t)broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...)PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v8 06/10] drivers: support PCI BAR mapping 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 ` (4 preceding siblings ...) 2026-05-06 6:12 ` [PATCH v8 05/10] drivers: add base driver probe skeleton liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 6:12 ` [PATCH v8 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (3 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel has been reset, need to restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "bar idx=%d, fd=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + bar_idx, 
cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed to mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 is used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = 
map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + 
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX,
+			txq_cnt, txq_base);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret);
+		goto l_free_seg1;
+	}
+
+	ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+			rxq_cnt, rxq_base);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret);
+		goto l_free_txq;
+	}
+
+	ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN,
+			irq_cnt, irq_base);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret);
+		goto l_free_rxq_tail;
+	}
+
+	ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR,
+			irq_cnt, irq_base);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret);
+		goto l_free_irq_dyn;
+	}
+
+	ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX,
+			irq_cnt, irq_base);
+	if (ret) {
+		PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret);
+		goto l_free_irq_itr;
+	}
+	goto l_end;
+
+l_free_irq_itr:
+	(void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR);
+l_free_irq_dyn:
+	(void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN);
+l_free_rxq_tail:
+	(void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL);
+l_free_txq:
+	(void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX);
+l_free_seg1:
+	if (bar_info[1].seg_info) {
+		rte_free(bar_info[1].seg_info);
+		bar_info[1].seg_info = NULL;
+	}
+l_free_seg0:
+	if (bar_info[0].seg_info) {
+		rte_free(bar_info[0].seg_info);
+		bar_info[0].seg_info = NULL;
+	}
+l_free_bar:
+	if (bar_info) {
+		rte_free(bar_info);
+		bar_info = NULL;
+		map_ctxt->bar_info = NULL;
+	}
+l_end:
+	return ret;
+}
+
+void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev)
+{
+	struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+	struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+	struct sxe2_pci_map_bar_info *bar_info = NULL;
+	u8 i = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	(void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v8 07/10] common/sxe2: add ioctl interface for DMA map and unmap
2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5
` (5 preceding siblings ...)
2026-05-06 6:12 ` [PATCH v8 06/10] drivers: support PCI BAR mapping liujie5
@ 2026-05-06 6:12 ` liujie5
2026-05-06 6:12 ` [PATCH v8 08/10] net/sxe2: support queue setup and control liujie5
` (2 subsequent siblings)
9 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Implement DMA mapping and unmapping functionality using ioctl calls.
This allows the driver to configure the hardware's IOMMU/DMA tables,
ensuring the device can safely access memory buffers allocated by
userspace. The mapping is established during device initialization or
queue setup and is revoked during device closure to prevent memory
leaks and ensure hardware security.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common.c | 48 ++++++++++
drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++
3 files changed, 161 insertions(+)
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
index dfdefb8b78..537d4e9f6a 100644
--- a/drivers/common/sxe2/sxe2_common.c
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
return ret;
}
+static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev,
+		void *addr, u64 iova, size_t len)
+{
+	struct sxe2_common_device *cdev;
+	s32 ret = SXE2_ERROR;
+
+	cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+	if (cdev == NULL) {
+		ret = SXE2_ERR_NODEV;
+		PMD_LOG_ERR(COM, "Failed to get device.");
+		goto l_end;
+	}
+
+	ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len);
+	if (ret) {
+		PMD_LOG_ERR(COM, "Failed to dma map, ret=%d", ret);
+		goto l_end;
+	}
+	
+l_end:
+	return ret;
+}
+
+static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev,
+		void *addr __rte_unused, u64 iova, size_t len __rte_unused)
+{
+	struct sxe2_common_device *cdev;
+	s32 ret = SXE2_ERROR;
+
+	cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+	if (cdev == NULL) {
+		ret = SXE2_ERR_NODEV;
+		PMD_LOG_ERR(COM, "Failed to get device.");
+		goto l_end;
+	}
+
+	ret = sxe2_drv_dev_dma_unmap(cdev, iova);
+	if (ret) {
+		PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret);
+		goto l_end;
+	}
+
+l_end:
+	return ret;
+}
+
static struct rte_pci_driver sxe2_common_pci_driver = {
.driver = {
.name = SXE2_COMMON_PCI_DRIVER_NAME,
},
.probe = sxe2_common_pci_probe,
.remove = sxe2_common_pci_remove,
+	.dma_map = sxe2_common_pci_dma_map,
+	.dma_unmap = sxe2_common_pci_dma_unmap,
};
static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table)
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 2bd7c2b2eb..1a14d401e7 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len)
l_end:
return ret;
}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map)
+s32
+sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr,
+		u64 iova, u64 size)
+{
+	struct sxe2_ioctl_iommu_dma_map cmd_params;
+	enum rte_iova_mode iova_mode;
+	s32 ret = SXE2_SUCCESS;
+	s32 cmd_fd = 0;
+
+	if (cdev->config.kernel_reset) {
+		ret = SXE2_ERR_PERM;
+		PMD_LOG_WARN(COM, "kernel was reset, need to restart the app.");
+		goto l_end;
+	}
+
+	iova_mode = rte_eal_iova_mode();
+	if (iova_mode == RTE_IOVA_PA) {
+		if (cdev->config.support_iommu) {
+			PMD_LOG_ERR(COM, "iommu does not support pa mode");
+			ret = SXE2_ERR_IO;
+		}
+		goto l_end;
+	} else if (iova_mode == RTE_IOVA_VA) {
+		if (!cdev->config.support_iommu) {
+			PMD_LOG_ERR(COM, "no iommu, va mode not supported, please use pa mode.");
+			ret = SXE2_ERR_IO;
+			goto l_end;
+
} + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h 
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v8 08/10] net/sxe2: support queue setup and control 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 ` (6 preceding siblings ...) 2026-05-06 6:12 ` [PATCH v8 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 6:12 ` [PATCH v8 09/10] drivers: add data path for Rx and Tx liujie5 2026-05-06 6:12 ` [PATCH v8 10/10] net/sxe2: add vectorized " liujie5 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 160a0de8ed..803e47c1aa 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -17,6 +17,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 
sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { 
#define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + 
rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if 
(dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth 
= ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + 
dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configure with Keep crc.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc 
*desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + 
PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u mbuf alloc for desc ring failed", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + 
rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ 
b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2vf tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v8 09/10] drivers: add data path for Rx and Tx 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 ` (7 preceding siblings ...) 2026-05-06 6:12 ` [PATCH v8 08/10] net/sxe2: support queue setup and control liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 6:12 ` [PATCH v8 10/10] net/sxe2: add vectorized " liujie5 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 13 +- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 11 files changed, 1082 insertions(+), 133 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 537d4e9f6a..d2ed1460a3 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -28,7 +28,7 @@ static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list static TAILQ_HEAD(sxe2_common_devices, 
sxe2_common_device) sxe2_common_devices_list = TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); -static pthread_mutex_t sxe2_common_devices_list_lock; +static rte_spinlock_t sxe2_common_devices_list_lock; static struct rte_pci_id *sxe2_common_pci_id_table; @@ -223,9 +223,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( cdev->config.kernel_reset = false; rte_ticketlock_init(&cdev->config.lock); - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); l_end: return cdev; @@ -233,10 +233,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( static void sxe2_common_device_free(struct sxe2_common_device *cdev) { - - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); rte_free(cdev); } @@ -662,7 +661,7 @@ sxe2_common_init(void) if (sxe2_commoin_inited) goto l_end; - pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + rte_spinlock_init(&sxe2_common_devices_list_lock); #ifdef SXE2_DPDK_DEBUG sxe2_common_log_stream_init(); #endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) 
\ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) 
\ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ 
-178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto 
l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 803e47c1aa..728a88b6a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -19,6 +19,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < 
bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > 
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v8 10/10] net/sxe2: add vectorized Rx and Tx 2026-05-06 6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5 ` (8 preceding siblings ...) 2026-05-06 6:12 ` [PATCH v8 09/10] drivers: add data path for Rx and Tx liujie5 @ 2026-05-06 6:12 ` liujie5 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 9 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-06 6:12 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch implements the vectorized data path for the sxe2 PMD. It utilizes SIMD instructions (e.g., SSE) to process multiple packets simultaneously, significantly improving throughput for small packet processing. The implementation includes: * Vectorized Rx burst function for bulk descriptor processing. * Vectorized Tx burst function with optimized resource cleanup. * Capability flags update to reflect vectorized path support. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 9 + drivers/net/sxe2/sxe2_ethdev.c | 8 +- drivers/net/sxe2/sxe2_txrx.c | 224 +++++++--- drivers/net/sxe2/sxe2_txrx.h | 12 +- drivers/net/sxe2/sxe2_txrx_poll.c | 184 ++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 3 +- drivers/net/sxe2/sxe2_txrx_vec.c | 188 ++++++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 547 ++++++++++++++++++++++++ 10 files changed, 1417 insertions(+), 65 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 728a88b6a1..b9618f2964 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -12,6 +12,14 @@ cflags += ['-g'] deps += ['common_sxe2', 
'hash','cryptodev','security'] +if arch_subdir == 'x86' + sources += files('sxe2_txrx_vec_sse.c') + + if is_windows and cc.get_id() != 'clang' + cflags += ['-fno-asynchronous-unwind-tables'] + endif +endif + sources += files( 'sxe2_ethdev.c', 'sxe2_cmd_chnl.c', @@ -21,6 +29,7 @@ sources += files( 'sxe2_rx.c', 'sxe2_txrx_poll.c', 'sxe2_txrx.c', + 'sxe2_txrx_vec.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 68d7e36cf1..7eaa1722d0 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -58,17 +58,11 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { }; static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { - /* SXE2_PCI_MAP_RES_INVALID */ {0, 0, 0}, - /* SXE2_PCI_MAP_RES_DOORBELL_TX */ { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ { SXE2_RXQ_TAIL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_DYN */ { SXE2_VF_DYN_CTL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ { SXE2_VF_INT_ITR(0, 0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_MSIX */ { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, }; @@ -312,6 +306,8 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .rxq_info_get = sxe2_rx_queue_info_get, .txq_info_get = sxe2_tx_queue_info_get, + .rx_burst_mode_get = sxe2_rx_burst_mode_get, + .tx_burst_mode_get = sxe2_tx_burst_mode_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c index 3e88ab5241..b6d9520841 100644 --- a/drivers/net/sxe2/sxe2_txrx.c +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -9,12 +9,11 @@ #include <rte_memzone.h> #include <ethdev_driver.h> #include <unistd.h> - #include "sxe2_txrx.h" #include "sxe2_txrx_common.h" +#include "sxe2_txrx_vec.h" #include "sxe2_txrx_poll.h" #include "sxe2_ethdev.h" - #include "sxe2_common_log.h" #include "sxe2_errno.h" #include "sxe2_osal.h" @@ -22,18 +21,38 @@ #if 
defined(RTE_ARCH_ARM64) #include <rte_cpuflags.h> #endif - +s32 __rte_cold +sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) || + txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + } + *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH; +l_end: + return ret; +} static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) { struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; s32 ret; u16 desc_idx; - if (unlikely(offset >= txq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - desc_idx = txq->next_use + offset; desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); if (desc_idx >= txq->ring_depth) { @@ -41,19 +60,16 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) if (desc_idx >= txq->ring_depth) desc_idx -= txq->ring_depth; } - if (desc_idx == 0) desc_idx = txq->rs_thresh - 1; else desc_idx -= 1; - if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == (txq->desc_ring[desc_idx].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) ret = RTE_ETH_TX_DESC_DONE; else ret = RTE_ETH_TX_DESC_FULL; - l_end: return ret; } @@ -61,13 +77,11 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) { struct rte_mbuf *m_seg = mbuf; - while (m_seg != NULL) { if (m_seg->data_len == 0) return SXE2_ERR_INVAL; m_seg = m_seg->next; } - return SXE2_SUCCESS; } @@ -79,7 +93,6 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, u64 ol_flags = 0; s32 ret = SXE2_SUCCESS; s32 i = 0; - for (i = 0; i < nb_pkts; i++) { mbuf = tx_pkts[i]; if (!mbuf) @@ -98,12 +111,10 @@ u16 
sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -SXE2_ERR_INVAL; goto l_end; } - if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { rte_errno = -SXE2_ERR_INVAL; goto l_end; } - #ifdef RTE_ETHDEV_DEBUG_TX ret = rte_validate_tx_offload(mbuf); if (ret != SXE2_SUCCESS) { @@ -116,14 +127,12 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -ret; goto l_end; } - ret = sxe2_tx_mbuf_empty_check(mbuf); if (ret != SXE2_SUCCESS) { rte_errno = -ret; goto l_end; } } - l_end: return i; } @@ -132,42 +141,119 @@ void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 tx_mode_flags = 0; - + s32 ret; + u32 vec_flags; + u32 batch_flags; + RTE_SET_USED(vec_flags); PMD_INIT_FUNC_TRACE(); - - dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; - dev->tx_pkt_burst = sxe2_tx_pkts; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_tx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) { +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) { +#ifdef CC_AVX512_SUPPORT + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX512); +#else + PMD_LOG_INFO(TX, "AVX512 is not supported in build env."); +#endif + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK)) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX2); + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK))) + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_SSE); +#endif + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + ret = sxe2_tx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK); + 
} + } + ret = sxe2_tx_simple_batch_support_check(dev, &batch_flags); + if (ret == SXE2_SUCCESS && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH) + tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH; + } + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + dev->tx_pkt_prepare = NULL; +#ifdef RTE_ARCH_X86 + if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse; + } else { + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple; + } +#endif + } else { + if (tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) { + dev->tx_pkt_prepare = NULL; + dev->tx_pkt_burst = sxe2_tx_pkts_simple; + } else { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + } + } adapter->q_ctxt.tx_mode_flags = tx_mode_flags; PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", tx_mode_flags, dev->data->port_id); } +static const struct { + eth_tx_burst_t tx_burst; + const char *info; +} sxe2_tx_burst_infos[] = { + { sxe2_tx_pkts, "Scalar" }, +#ifdef RTE_ARCH_X86 + { sxe2_tx_pkts_vec_sse, "Vector SSE" }, + { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" }, +#endif +}; + +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode) +{ + eth_tx_burst_t pkt_burst = dev->tx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i; + u32 size; + size = RTE_DIM(sxe2_tx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_tx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) { struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; s32 ret; - if (unlikely(offset >= rxq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - if (offset >= rxq->ring_depth - rxq->hold_num) { ret = RTE_ETH_RX_DESC_UNAVAIL; goto l_end; } - if 
(rxq->processing_idx + offset >= rxq->ring_depth) desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; else desc = &rxq->desc_ring[rxq->processing_idx + offset]; - if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) ret = RTE_ETH_RX_DESC_DONE; else ret = RTE_ETH_RX_DESC_AVAIL; - l_end: PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", offset, ret, rxq->queue_id, rxq->port_id); @@ -179,7 +265,6 @@ static s32 sxe2_rx_queue_count(void *rx_queue) struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; u16 done_num = 0; - desc = &rxq->desc_ring[rxq->processing_idx]; while ((done_num < rxq->ring_depth) && (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & @@ -190,59 +275,94 @@ static s32 sxe2_rx_queue_count(void *rx_queue) else desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; } - PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", done_num, rxq->queue_id, rxq->port_id); - return done_num; } -static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) -{ - struct sxe2_rx_queue *rxq; - bool en = false; - u16 i; - - for (i = 0; i < dev->data->nb_rx_queues; ++i) { - rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; - if (rxq == NULL) - continue; - - if (0 != (rxq->offloads & offload)) { - en = true; - goto l_end; - } - } - -l_end: - return en; -} - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 rx_mode_flags = 0; + s32 ret; + u32 vec_flags; PMD_INIT_FUNC_TRACE(); - + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_rx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { +#ifdef RTE_ARCH_X86 + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && 
+ (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_AVX2); + } + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_SSE); + } +#endif + if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) { + ret = sxe2_rx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK); + } + } + } +#ifdef RTE_ARCH_X86 + if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) { + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload; + goto l_end; + } +#endif if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; else dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - + goto l_end; +l_end: PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", rx_mode_flags, dev->data->port_id); adapter->q_ctxt.rx_mode_flags = rx_mode_flags; } +static const struct { + eth_rx_burst_t rx_burst; + const char *info; +} sxe2_rx_burst_infos[] = { + { sxe2_rx_pkts_scattered, "Scalar Scattered" }, + { sxe2_rx_pkts_scattered_split, "Scalar Scattered split" }, +#ifdef RTE_ARCH_X86 + { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" }, +#endif +}; + +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode) +{ + eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i, size; + size = RTE_DIM(sxe2_rx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_rx_burst_infos[i].rx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_rx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + void sxe2_set_common_function(struct rte_eth_dev *dev) { PMD_INIT_FUNC_TRACE(); - dev->rx_queue_count = sxe2_rx_queue_count; dev->rx_descriptor_status = sxe2_rx_desciptor_status; dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - 
dev->tx_descriptor_status = sxe2_tx_desciptor_status; dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; dev->tx_pkt_burst = sxe2_tx_pkts; diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h index cd9ebfa32f..7bb852789c 100644 --- a/drivers/net/sxe2/sxe2_txrx.h +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -6,16 +6,16 @@ #define SXE2_TXRX_H #include <ethdev_driver.h> #include "sxe2_queue.h" - void sxe2_set_common_function(struct rte_eth_dev *dev); - +s32 __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags); u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); - void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); - +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode); +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode); #endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c index 55bea8b74c..41f7288318 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.c +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -19,6 +19,66 @@ #include "sxe2_common_log.h" #include "sxe2_errno.h" +static __rte_always_inline s32 +sxe2_tx_bufs_free(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1]; + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) { + mbuf = 
buffer[0].mbuf; + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = buffer[i].mbuf; + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + rte_mempool_put(buffer->mbuf->pool, buffer->mbuf); + buffer->mbuf = NULL; + } + } + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + buffer->mbuf = NULL; + } + } + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) { s32 ret = SXE2_SUCCESS; @@ -330,6 +390,130 @@ u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) return tx_num; } +static __rte_always_inline void +sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 desc_offset; + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, (*tx_pkts)->data_len, 0); +} +static __rte_always_inline void +sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 i; + u32 desc_offset; + for (i = 0; i < SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) { + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = 
rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, + (*tx_pkts)->data_len, + 0); + } +} + +static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use]; + volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use]; + u32 i, j; + u32 mainpart; + u32 leftover; + mainpart = nb_pkts & ((u32)~SXE2_TX_FILL_PER_LOOP_MASK); + leftover = nb_pkts & ((u32)SXE2_TX_FILL_PER_LOOP_MASK); + for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) { + for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j) + (buffer + i + j)->mbuf = *(tx_pkts + i + j); + sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i); + } + if (unlikely(leftover > 0)) { + for (i = 0; i < leftover; ++i) { + (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i); + sxe2_tx_data_desc_fill(desc + mainpart + i, + tx_pkts + mainpart + i); + } + } +} + +static inline u16 sxe2_tx_pkts_batch(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + u16 res_num = 0; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx batch: not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + txq->desc_free_num -= nb_pkts; + if ((txq->next_use + nb_pkts) > txq->ring_depth) { + res_num = txq->ring_depth - txq->next_use; + sxe2_tx_ring_fill(txq, tx_pkts, res_num); + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs = txq->rs_thresh - 1; + txq->next_use = 0; + } + sxe2_tx_ring_fill(txq,
tx_pkts + res_num, nb_pkts - res_num); + txq->next_use = txq->next_use + (nb_pkts - res_num); + if (txq->next_use > txq->next_rs) { + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + if (txq->next_rs >= txq->ring_depth) + txq->next_rs = txq->rs_thresh - 1; + } + if (txq->next_use >= txq->ring_depth) + txq->next_use = 0; + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, txq->next_use, nb_pkts); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use); + SXE2_TX_STATS_CNT(tx_queue, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 tx_done_num; + u16 tx_once_num; + u16 tx_need_num; + if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) { + tx_done_num = sxe2_tx_pkts_batch(tx_queue, + tx_pkts, nb_pkts); + goto l_end; + } + tx_done_num = 0; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM); + tx_once_num = sxe2_tx_pkts_batch(tx_queue, + &tx_pkts[tx_done_num], tx_need_num); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } +l_end: + return tx_done_num; +} + static inline void sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) { diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h index 4924b0f41f..67da08e58e 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.h +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -8,7 +8,8 @@ #include "sxe2_queue.h" u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); diff --git 
a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c new file mode 100644 index 0000000000..1e44d510cd --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.c @@ -0,0 +1,188 @@ +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_rx_queue *rxq; + s32 ret = SXE2_SUCCESS; + u16 i; + *vec_flags = SXE2_RX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (!rte_is_power_of_2(rxq->ring_depth)) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC && + (rxq->ring_depth % rxq->rx_free_thresh) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + if ((rxq->offloads & offload) != 0) { + en = true; + goto l_end; + } + } +l_end: + return en; +} + +static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq) +{ + const u16 mask = rxq->ring_depth - 1; + u16 i; + if (unlikely(!rxq->buffer_ring)) { + PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring is NULL."
+ "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id); + return; + } + if (rxq->realloc_num >= rxq->ring_depth) + return; + if (rxq->realloc_num == 0) { + for (i = 0; i < rxq->ring_depth; ++i) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } else { + for (i = rxq->processing_idx; + i != rxq->realloc_start; + i = (i + 1) & mask) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + rxq->realloc_num = rxq->ring_depth; + memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0])); +} + +static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq) +{ + uintptr_t data; + struct rte_mbuf mbuf_def; + mbuf_def.buf_addr = 0; + mbuf_def.nb_segs = 1; + mbuf_def.data_off = RTE_PKTMBUF_HEADROOM; + mbuf_def.port = rxq->port_id; + rte_mbuf_refcnt_set(&mbuf_def, 1); + rte_compiler_barrier(); + data = (uintptr_t)&mbuf_def.rearm_data; + rxq->mbuf_init_value = *(u64 *)data; +} + +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_rx_queue *rxq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i); + continue; + } + rxq->ops.mbufs_release = sxe2_rx_queue_mbufs_release_vec; + sxe2_rx_queue_vec_init(rxq); + } + return ret; +} + +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u32 i; + *vec_flags = SXE2_TX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC || + txq->rs_thresh > SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) { + ret = SXE2_ERR_NOTSUP; + 
goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + u16 i; + if (unlikely(txq == NULL || txq->buffer_ring == NULL)) { + PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params."); + goto l_end; + } + i = txq->next_dd - (txq->rs_thresh - 1); + buffer = txq->buffer_ring; + if (txq->next_use < i) { + for ( ; i < txq->ring_depth; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + i = 0; + } + for (; i < txq->next_use; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } +l_end: + return; +} + +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_tx_queue *txq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) { + PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i); + continue; + } + txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec; + } + return ret; +} diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h new file mode 100644 index 0000000000..cb6a3dd3b8 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_VEC_H_ +#define _SXE2_TXRX_VEC_H_ +#include <ethdev_driver.h> +#include "sxe2_queue.h" +#include "sxe2_type.h" +#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10) +#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \ + SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \ + SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \ + SXE2_RX_MODE_VEC_NEON) +#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10) +#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \ + SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \ + SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \ + SXE2_TX_MODE_VEC_NEON) +#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \ + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \ + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_TSO | \ + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_SECURITY | \ + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) +#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_CKSUM) +#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP | \ + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \ + RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_SECURITY | \ + 
RTE_ETH_RX_OFFLOAD_QINQ_STRIP) +#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH) +#ifdef RTE_ARCH_X86 +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts); +#endif +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload); +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev); +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h new file mode 100644 index 0000000000..c0405c9a59 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h @@ -0,0 +1,235 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_TXRX_VEC_COMMON_H__ +#define __SXE2_TXRX_VEC_COMMON_H__ +#include <rte_atomic.h> +#ifdef PCLINT +#include "avx_stub.h" +#endif +#include "sxe2_rx.h" +#include "sxe2_queue.h" +#include "sxe2_tx.h" +#include "sxe2_vsi.h" +#include "sxe2_ethdev.h" +#define SXE2_RX_NUM_PER_LOOP_SSE 4 +#define SXE2_RX_NUM_PER_LOOP_AVX 8 +#define SXE2_RX_NUM_PER_LOOP_NEON 4 +#define SXE2_RX_REARM_THRESH_VEC 64 +#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32 +#define SXE2_TX_RS_THRESH_MIN_VEC 32 +#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64 + +static __rte_always_inline void +sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 i; + for (i = 0; i < nb_pkts; ++i) + buffer[i].mbuf = tx_pkts[i]; +} + +static __rte_always_inline s32 +sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)]; + mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf); + if (likely(mbuf)) { + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (likely(mbuf)) { + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + } + 
} + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + +static inline void +sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, u64 *desc_qw1) +{ + u64 offloads = mbuf->ol_flags; + u32 desc_cmd = 0; + u32 desc_offset = 0; + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + default: + break; + } + *desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + *desc_qw1 |= ((u64)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT; + } + *desc_qw1 |= ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT; +} +#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \ + (((_flags) & 0x30) >> 4) + +static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq, + struct rte_mbuf *mbuf, u8 umbcast_flag) +{ + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed); + switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } + } +} + +static inline u16 +sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq, + struct rte_mbuf **mbuf_bufs, u16 mbuf_num, + u8 *split_rxe_flags, u8 *umbcast_flags) +{ + struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *tmp_seg; + u16 done_num, buf_idx; + done_num = 0; + for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) { + if (last_seg) { + last_seg->next = mbuf_bufs[buf_idx]; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + first_seg->nb_segs++; + first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len; + last_seg = last_seg->next; + if (split_rxe_flags[buf_idx] == 0) { + first_seg->hash = last_seg->hash; + first_seg->vlan_tci = last_seg->vlan_tci; + first_seg->ol_flags = last_seg->ol_flags; + first_seg->pkt_len -= rxq->crc_len; + if (last_seg->data_len > rxq->crc_len) { + last_seg->data_len -= rxq->crc_len; + } else { + tmp_seg = first_seg; + first_seg->nb_segs--; + while (tmp_seg->next != last_seg) + tmp_seg = tmp_seg->next; + tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len); + tmp_seg->next = NULL; + rte_pktmbuf_free_seg(last_seg); + last_seg = NULL; + } + done_pkts[done_num++] = first_seg; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]); + first_seg = NULL; + last_seg = NULL; + } else if 
(split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + continue; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + last_seg = NULL; + continue; + } + } else { + if (split_rxe_flags[buf_idx] == 0) { + done_pkts[done_num++] = mbuf_bufs[buf_idx]; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx], + umbcast_flags[buf_idx]); + continue; + } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + first_seg = mbuf_bufs[buf_idx]; + last_seg = first_seg; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]); + continue; + } + } + } + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *))); + return done_num; +} +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c new file mode 100644 index 0000000000..9bc291577b --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c @@ -0,0 +1,547 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_bitops.h> +#include <rte_malloc.h> +#include <rte_mempool.h> +#include <rte_vect.h> +#include "rte_common.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_queue.h" +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_vsi.h" + +static __rte_always_inline void +sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf *pkt, + u64 desc_cmd, bool with_offloads) +{ + __m128i data_desc; + u64 desc_qw1; + u32 desc_offset; + desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA | + ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT | + ((u64)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len); + desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (with_offloads) + sxe2_tx_desc_fill_offloads(pkt, &desc_qw1); + data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc); +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + volatile union sxe2_tx_data_desc *desc; + struct sxe2_tx_buffer *buffer; + u16 next_use; + u16 res_num; + u16 tx_num; + u16 i; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free_vec(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx pkts sse batch: may not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + tx_num = nb_pkts; + next_use = txq->next_use; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + txq->desc_free_num -= nb_pkts; + res_num = txq->ring_depth - txq->next_use; + if (tx_num >= res_num) { + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num); + for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, 
+ SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++, + (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS), + with_offloads); + tx_num -= res_num; + next_use = 0; + txq->next_rs = txq->rs_thresh - 1; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + } + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num); + for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + next_use += tx_num; + if (next_use > txq->next_rs) { + txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + } + txq->next_use = next_use; + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, nb_pkts); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + u16 tx_done_num = 0; + u16 tx_once_num; + u16 tx_need_num; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh); + tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq, + tx_pkts + tx_done_num, + tx_need_num, with_offloads); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } + return tx_done_num; +} + +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, false); +} +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, true); +} + +static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq) +{ + volatile 
union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + struct rte_mbuf *mbuf0, *mbuf1; + __m128i dma_addr0, dma_addr1; + __m128i virt_addr0, virt_addr1; + __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, + RTE_PKTMBUF_HEADROOM); + s32 ret; + u16 i; + u16 new_tail; + buffer = &rxq->buffer_ring[rxq->realloc_start]; + desc = &rxq->desc_ring[rxq->realloc_start]; + ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer, + SXE2_RX_REARM_THRESH_VEC); + if (ret != 0) { + PMD_LOG_RX_INFO("Rx mbuf vec alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, rxq->queue_id); + if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) { + dma_addr0 = _mm_setzero_si128(); + for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) { + buffer[i] = &rxq->fake_mbuf; + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read), + dma_addr0); + } + } + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed += + SXE2_RX_REARM_THRESH_VEC; + goto l_end; + } + for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) { + mbuf0 = buffer[0]; + mbuf1 = buffer[1]; +#if RTE_IOVA_IN_MBUF + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) != + offsetof(struct rte_mbuf, buf_addr) + 8); +#endif + virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr); + virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr); +#if RTE_IOVA_IN_MBUF + dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1); +#else + dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1); +#endif + dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room); + dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr0); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr1); + } + rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC; + if (rxq->realloc_start >= rxq->ring_depth) + rxq->realloc_start = 0; + rxq->realloc_num -= 
SXE2_RX_REARM_THRESH_VEC; + new_tail = (rxq->realloc_start == 0) ? + (rxq->ring_depth - 1) : (rxq->realloc_start - 1); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail); +l_end: + return; +} + +static __rte_always_inline __m128i +sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4]) +{ + __m128i descs_tmp1, descs_tmp2; + __m128i descs_fnav_vld; + __m128i v_zeros, v_ffff, v_u32_one; + __m128i m_flags; + const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID); + descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]); + descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]); + descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2); + descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26); + descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31); + v_zeros = _mm_setzero_si128(); + v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros); + v_u32_one = _mm_srli_epi32(v_ffff, 31); + m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one); + m_flags = _mm_and_si128(m_flags, fdir_flags); + return m_flags; +} + +static __rte_always_inline void +sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq, + volatile union sxe2_rx_desc *desc __rte_unused, + __m128i descs_arr[4], + struct rte_mbuf **rx_pkts) +{ + const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value); + __m128i rearm_arr[4]; + __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags; + const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04, + 0x00001C04, 0x00001C04); + const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000, + 0x20000000, 0x20000000); + const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, + 0, 0, 0, RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + 0, 0, 0, 0); + const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH, + 0, 0, 0, 0); + const __m128i cksum_flags = + _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + 
RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1)); + const __m128i cksum_mask = + _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD); + const __m128i vlan_mask = + _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED); + flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]); + tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]); + tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags); + tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags); + tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask); + tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask); + tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo); + flags = _mm_and_si128(tmp_flags, vlan_mask); + tmp_desc_lo = 
_mm_srli_epi32(tmp_desc_lo, 10); + tmp_flags = _mm_shuffle_epi8(cksum_flags, tmp_desc_lo); + tmp_flags = _mm_slli_epi32(tmp_flags, 1); + tmp_flags = _mm_and_si128(tmp_flags, cksum_mask); + flags = _mm_or_si128(flags, tmp_flags); + tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27); + tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi); + flags = _mm_or_si128(flags, tmp_flags); +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + if (rxq->fnav_enable) { + __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr); + flags = _mm_or_si128(flags, tmp_fnav_flags); + rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id; + rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id; + rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id; + rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id; + } +#endif + rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30); + rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30); + rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30); + rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) != + offsetof(struct rte_mbuf, rearm_data) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) != + RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]); +} + +static inline u16 +sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts, u8 *split_rxe_flags, + u8 *umbcast_flags) +{ + volatile union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i staterr, sterr_tmp1, sterr_tmp2; + 
__m128i pmbuf0; + __m128i ptype_all; +#ifdef RTE_ARCH_X86_64 + __m128i pmbuf1; +#endif + u32 i; + u32 bit_num; + u16 done_num = 0; + const u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + const __m128i crc_adjust = + _mm_set_epi16(0, 0, 0, + -rxq->crc_len, + 0, -rxq->crc_len, + 0, 0); + const __m128i rvp_shuf_mask = + _mm_set_epi8(7, 6, 5, 4, + 3, 2, + 13, 12, + 0XFF, 0xFF, 13, 12, + 0xFF, 0xFF, 0xFF, 0xFF); + const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL, + 0x0000000100000001LL); + const __m128i eop_mask = _mm_slli_epi32(dd_mask, + SXE2_RX_DESC_STATUS_EOP_SHIFT); + const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL, + 0x0000208000002080LL); + const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x04, 0x0C, + 0x00, 0x08); + const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12); + desc = &rxq->desc_ring[rxq->processing_idx]; + rte_prefetch0(desc); + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE); + if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC) + sxe2_rx_queue_rearm_sse(rxq); + if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK) == 0) + goto l_end; + buffer = &rxq->buffer_ring[rxq->processing_idx]; + for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE, + desc += SXE2_RX_NUM_PER_LOOP_SSE) { + pmbuf0 = 
_mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i])); + descs_arr[3] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 3)); + rte_compiler_barrier(); + _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0); +#ifdef RTE_ARCH_X86_64 + pmbuf1 = _mm_loadu_si128((__m128i *)&buffer[i + 2]); +#endif + descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 2)); + rte_compiler_barrier(); + descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 1)); + rte_compiler_barrier(); + descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc)); +#ifdef RTE_ARCH_X86_64 + _mm_storeu_si128((__m128i *)&rx_pkts[i + 2], pmbuf1); +#endif + if (split_rxe_flags) { + rte_mbuf_prefetch_part2(rx_pkts[i]); + rte_mbuf_prefetch_part2(rx_pkts[i + 1]); + rte_mbuf_prefetch_part2(rx_pkts[i + 2]); + rte_mbuf_prefetch_part2(rx_pkts[i + 3]); + } + rte_compiler_barrier(); + mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask); + mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask); + mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask); + mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask); + sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]); + sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]); + sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts); + mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust); + mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust); + mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust); + mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust); + staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2); + ptype_all = _mm_and_si128(staterr, ptype_mask); + _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1, + mbuf_arr[3]); + _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1, + mbuf_arr[2]); + if (umbcast_flags != NULL) { + const __m128i umbcast_mask = + _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + 
SXE2_RX_DESC_STATUS_UMBCAST_MASK); + const __m128i umbcast_shuf_mask = + _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x07, 0x0F, + 0x03, 0x0B); + __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask); + umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask); + *(s32 *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits); + umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + if (split_rxe_flags != NULL) { + __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask); + __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask); + rxe_bits = _mm_srli_epi32(rxe_bits, 7); + eop_bits = _mm_or_si128(eop_bits, rxe_bits); + eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask); + *(s32 *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits); + split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + staterr = _mm_and_si128(staterr, dd_mask); + staterr = _mm_packs_epi32(staterr, _mm_setzero_si128()); + _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1, + mbuf_arr[1]); + _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1, + mbuf_arr[0]); + rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)]; + rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)]; + rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)]; + rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)]; + bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr)); + done_num += bit_num; + if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE)) + break; + } + rxq->processing_idx += done_num; + rxq->processing_idx &= (rxq->ring_depth - 1); + rxq->realloc_num += done_num; + PMD_LOG_RX_DEBUG("port_id=%u queue_id=%u last_id=%u recv_pkts=%d", + rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num); +l_end: + return done_num; +} +static __rte_always_inline u16 +sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + const u64 *split_rxe_flags64; + u8 
split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u8 umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u16 rx_done_num; + u16 rx_pkt_done_num; + rx_pkt_done_num = 0; + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, umbcast_flags); + } else { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, NULL); + } + if (rx_done_num == 0) + goto l_end; + if (!rxq->vsi->adapter->devargs.sw_stats_en) { + split_rxe_flags64 = (u64 *)split_rxe_flags; + if (rxq->pkt_first_seg == NULL && + split_rxe_flags64[0] == 0 && + split_rxe_flags64[1] == 0 && + split_rxe_flags64[2] == 0 && + split_rxe_flags64[3] == 0) { + rx_pkt_done_num = rx_done_num; + goto l_end; + } + if (rxq->pkt_first_seg == NULL) { + while (rx_pkt_done_num < rx_done_num && + split_rxe_flags[rx_pkt_done_num] == 0) + rx_pkt_done_num++; + if (rx_pkt_done_num == rx_done_num) + goto l_end; + rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num]; + } + } + rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num], + rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num], + &umbcast_flags[rx_pkt_done_num]); +l_end: + return rx_pkt_done_num; +} + +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + u16 done_num = 0; + u16 once_num; + while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) { + once_num = + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, + SXE2_RX_PKTS_BURST_BATCH_NUM_VEC); + done_num += once_num; + nb_pkts -= once_num; + if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) + goto l_end; + } + done_num += + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, nb_pkts); +l_end: + SXE2_RX_STATS_CNT(rx_queue, rx_pkts_num, done_num); + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v9 00/10] Add Linkdata sxe2 driver 2026-05-06 6:12 ` [PATCH v8 10/10] net/sxe2: add vectorized " liujie5 @ 2026-05-06 9:56 ` liujie5 2026-05-06 9:56 ` [PATCH v9 01/10] mailmap: add Jie Liu liujie5 ` (9 more replies) 0 siblings, 10 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:56 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> V9: - Addressed AI comments Jie Liu (10): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control drivers: add data path for Rx and Tx net/sxe2: add vectorized Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 4 + drivers/common/sxe2/meson.build | 21 + drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 43 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 971 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 315 +++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ 
drivers/net/sxe2/sxe2_queue.c | 39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 367 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 966 ++++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 17 + drivers/net/sxe2/sxe2_txrx_vec.c | 188 ++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 547 ++++++++++++ drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 44 files changed, 10052 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c 
create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
* [PATCH v9 01/10] mailmap: add Jie Liu 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 @ 2026-05-06 9:56 ` liujie5 2026-05-06 9:56 ` [PATCH v9 02/10] doc: add sxe2 guide and release notes liujie5 ` (8 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:56 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 895412e568..d2c4485636 100644 --- a/.mailmap +++ b/.mailmap @@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v9 02/10] doc: add sxe2 guide and release notes 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 2026-05-06 9:56 ` [PATCH v9 01/10] mailmap: add Jie Liu liujie5 @ 2026-05-06 9:56 ` liujie5 2026-05-06 9:56 ` [PATCH v9 03/10] drivers: add sxe2 basic structures liujie5 ` (7 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:56 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for the SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 4 ++++ 4 files changed, 39 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates it is only supported when the +; non-vector path is selected.
+; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps Network Adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported. + +Implementation details +---------------------- + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resource allocations are handled by the kernel, +combined with hardware specifications that allow it to handle virtual memory +addresses directly, ensures that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces, +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index f012d47a4b..fa0f0f5cca 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -64,6 +64,10 @@ New Features * ``--auto-probing`` enables the initial bus probing, which is the current default behavior. +* **Added Linkdata sxe2 ethernet driver.** + + Added network driver for Linkdata network adapters.
+ Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v9 03/10] drivers: add sxe2 basic structures 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 2026-05-06 9:56 ` [PATCH v9 01/10] mailmap: add Jie Liu liujie5 2026-05-06 9:56 ` [PATCH v9 02/10] doc: add sxe2 guide and release notes liujie5 @ 2026-05-06 9:56 ` liujie5 2026-05-06 9:56 ` [PATCH v9 04/10] common/sxe2: add base driver skeleton liujie5 ` (6 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:56 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 19 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1965 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..09ce556f70 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,19 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void +sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, 
Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__ RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) 
\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) \ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) 
\ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* __SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = -ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, 
+ + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT 
BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 +#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) 
+#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + 
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define 
SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define 
SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define 
SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 
0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_INTERNAL_VER_H__ +#define __SXE2_INTERNAL_VER_H__ + +#define SXE2_VER_MAJOR_OFFSET (16) +#define SXE2_MK_VER(major, minor) \ + ((major) << SXE2_VER_MAJOR_OFFSET | (minor)) +#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff) +#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff) + +#define SXE2_ITR_VER_MAJOR_V100 1 +#define SXE2_ITR_VER_MAJOR_V200 2 + +#define SXE2_ITR_VER_MAJOR 1 +#define SXE2_ITR_VER_MINOR 1 +#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR) + +#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100) +#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200) + +#define SXE2LIB_ITR_VER_MAJOR 1 +#define SXE2LIB_ITR_VER_MINOR 1 +#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR) + +#define SXE2_DRV_CLI_VER_MAJOR 1 +#define SXE2_DRV_CLI_VER_MINOR 1 +#define SXE2_DRV_CLI_VER \ + SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR) + +#endif diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h new file mode 100644 index 0000000000..fd6823fe98 --- /dev/null +++ b/drivers/common/sxe2/sxe2_osal.h @@ -0,0 +1,584 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_OSAL_H__ +#define __SXE2_OSAL_H__ +#include <string.h> +#include <stdint.h> +#include <stdarg.h> +#include <inttypes.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_ether.h> +#include <rte_version.h> + +#include "sxe2_type.h" + +#define BIT(nr) (1UL << (nr)) +#ifndef __BITS_PER_LONG +#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG) +#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG)) + +#ifndef BIT_ULL +#define BIT_ULL(a) (1ULL << (a)) +#endif + +#define MIN(a, b) ((a) < (b) ? 
(a) : (b)) + +#define BITS_PER_BYTE 8 + +#define IS_UNICAST_ETHER_ADDR(addr) \ + ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0)) + +#define STRUCT_SIZE(ptr, field, num) \ + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) + +#ifndef TAILQ_FOREACH_SAFE +#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \ + for ((var) = TAILQ_FIRST((head)); \ + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \ + (var) = (tvar)) +#endif + +#define SXE2_QUEUE_WAIT_RETRY_CNT (50) + +#define __iomem + +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define dma_addr_t rte_iova_t + +#define resource_size_t u64 + +#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f) +#define ARRAY_SIZE(arr) RTE_DIM(arr) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +#define CPU_TO_BE16(o) rte_cpu_to_be_16(o) +#define CPU_TO_BE32(o) rte_cpu_to_be_32(o) +#define CPU_TO_BE64(o) rte_cpu_to_be_64(o) +#define BE16_TO_CPU(o) rte_be_to_cpu_16(o) + +#define NTOHS(a) rte_be_to_cpu_16(a) +#define NTOHL(a) rte_be_to_cpu_32(a) +#define HTONS(a) rte_cpu_to_be_16(a) +#define HTONL(a) rte_cpu_to_be_32(a) + +#define udelay(x) rte_delay_us(x) + +#define mdelay(x) rte_delay_us(1000 * (x)) + +#define msleep(x) rte_delay_us(1000 * (x)) + +#ifndef DIV_ROUND_UP +#define DIV_ROUND_UP(n, d) \ + (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) +#endif + +#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) + +#define __bf_shf(x) ((uint32_t)rte_bsf64(x)) + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG 32 +#endif + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask))) + +#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d) 
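The FIELD_PREP/FIELD_GET pair above derives a field's shift from its mask via `__bf_shf` (the bit-scan-forward of the mask, rte_bsf64 in the OSAL), so callers only ever name the mask. A standalone sketch of the same idea, with the GCC/Clang builtin `__builtin_ctzll` standing in for rte_bsf64 so it compiles without DPDK:

```c
#include <stdint.h>

/* The lowest set bit of the mask is the field's shift, so a single mask
 * constant fully describes the field (same contract as the OSAL's
 * FIELD_PREP/FIELD_GET; __builtin_ctzll stands in for rte_bsf64). */
#define BF_SHF(mask)          ((unsigned int)__builtin_ctzll(mask))

/* Place a value into the field described by mask. */
#define FIELD_PREP(mask, val) (((uint64_t)(val) << BF_SHF(mask)) & (mask))

/* Extract the field described by mask from a register value. */
#define FIELD_GET(mask, reg)  (((reg) & (mask)) >> BF_SHF(mask))
```

For instance, with the ITR index mask 0x3 << 11 (0x1800, as in SXE2_VF_GLINT_CEQCTL_ITR_INDX_M), FIELD_PREP(0x1800, 2) yields 0x1000 and FIELD_GET recovers the 2.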
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef int8_t s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
'common/zsda', # depends on bus. + 'common/sxe2', # depends on bus. 'mempool', # depends on common and bus. 'dma', # depends on common and bus. 'net', # depends on common, bus, mempool -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v9 04/10] common/sxe2: add base driver skeleton 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 ` (2 preceding siblings ...) 2026-05-06 9:56 ` [PATCH v9 03/10] drivers: add sxe2 basic structures liujie5 @ 2026-05-06 9:56 ` liujie5 2026-05-06 9:56 ` [PATCH v9 05/10] drivers: add base driver probe skeleton liujie5 ` (5 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:56 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between the user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 2 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ 6 files changed, 1071 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build index 09ce556f70..b4ad4ed58d 100644 --- a/drivers/common/sxe2/meson.build +++ b/drivers/common/sxe2/meson.build @@ -15,5 +15,7 @@ cflags += [ deps += ['bus_pci', 'net', 'eal', 'ethdev'] sources = files( + 'sxe2_common.c', 'sxe2_common_log.c', + 'sxe2_ioctl_chnl.c', ) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c new 
file mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; 
+ + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void *args) +{ + u32 *class_type = (u32 *)args; + 
s32 ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + 
(void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + } + + cdev->cdrv = cdrv; +l_end: + return ret; +} + 
+static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto l_free_args; + } + + ret = 
sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to find device to remove."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx; + s32 i; + bool exists = false; + + for (i = 
0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + sxe2_common_pci_driver.drv_flags |= 
RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_common_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_common_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); +#ifdef SXE2_DPDK_DEBUG + sxe2_common_log_stream_init(); +#endif + sxe2_common_pci_init(); + sxe2_common_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..f62e00e053 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = SXE2_ERR_IO; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, restart the app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]" + " opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct 
rte_pci_device *pci_dev) +{ + s32 ret = SXE2_SUCCESS; + s32 fd = 0; + s8 drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd > 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, restart the app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + 
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v9 05/10] drivers: add base driver probe skeleton 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 ` (3 preceding siblings ...) 2026-05-06 9:56 ` [PATCH v9 04/10] common/sxe2: add base driver skeleton liujie5 @ 2026-05-06 9:56 ` liujie5 2026-05-06 9:56 ` [PATCH v9 06/10] drivers: support PCI BAR mapping liujie5 ` (4 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:56 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 28 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3031 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 
100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, restart the app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64, + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 
'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..98d0b7fc6d --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,28 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Process the base subdirectory and collect its target objects + +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg fill failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, 
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req = {0}; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rte_cpu_to_le_16(rxq->queue_id); + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req = {0}; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = rte_cpu_to_le_16(txq->queue_id); + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return 
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to enable queues."); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + 
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = NULL; + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret = SXE2_SUCCESS; + + if (!cdev) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI(cdev->dev); + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto 
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, +
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; + +struct sxe2_rx_queue; + +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *rxq); +}; + +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t) pkts; + RTE_ATOMIC(uint64_t) bytes; + RTE_ATOMIC(uint64_t) drop_pkts; + RTE_ATOMIC(uint64_t) drop_bytes; + RTE_ATOMIC(uint64_t) unicast_pkts; + RTE_ATOMIC(uint64_t) multicast_pkts; + RTE_ATOMIC(uint64_t) broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...) PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
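The VSI creation path above sizes queue and interrupt resources per VSI type before syncing to firmware: PF/VF VSIs inherit the adapter's assigned counts, the eswitch VSI gets one of each. A minimal sketch of that sizing decision, using hypothetical stand-in names (not the driver's real structures):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the driver's VSI types. */
enum vsi_type { VSI_T_DPDK_PF, VSI_T_DPDK_VF, VSI_T_DPDK_ESW, VSI_T_OTHER };

/*
 * Mirrors the branching in sxe2_vsi_node_create(): PF/VF VSIs inherit
 * the adapter's assigned queue-pair and interrupt counts, while the
 * eswitch VSI gets a single queue and a single interrupt.
 */
static void vsi_resource_count(enum vsi_type type,
                               uint16_t adapter_qp_cnt, uint16_t adapter_irq_cnt,
                               uint16_t *num_queues, uint16_t *num_irqs)
{
    if (type == VSI_T_DPDK_PF || type == VSI_T_DPDK_VF) {
        *num_queues = adapter_qp_cnt;
        *num_irqs = adapter_irq_cnt;
    } else if (type == VSI_T_DPDK_ESW) {
        *num_queues = 1;
        *num_irqs = 1;
    } else {
        *num_queues = 0;
        *num_irqs = 0;
    }
}
```

In the driver itself the queue base index also comes from `adapter->q_ctxt.base_idx_in_pf`; the sketch only covers the count selection.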
* [PATCH v9 06/10] drivers: support PCI BAR mapping 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 ` (4 preceding siblings ...) 2026-05-06 9:56 ` [PATCH v9 05/10] drivers: add base driver probe skeleton liujie5 @ 2026-05-06 9:56 ` liujie5 2026-05-06 9:56 ` [PATCH v9 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (3 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:56 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + cmd_fd, 
bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 is used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = 
map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + 
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + +#endif -- 2.47.3
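The segment-mapping helper in this patch cannot mmap() at an arbitrary register offset: it rounds the offset down to a page boundary, remembers the residual in-page offset, and rounds the mapping length up to whole pages. A small self-contained sketch of that arithmetic, with plain-C stand-ins for DPDK's RTE_ALIGN_FLOOR()/RTE_ALIGN() macros:

```c
#include <assert.h>
#include <stddef.h>

/* Plain-C equivalents of RTE_ALIGN_FLOOR()/RTE_ALIGN() for power-of-two pages. */
static size_t align_floor(size_t v, size_t align) { return v & ~(align - 1); }
static size_t align_ceil(size_t v, size_t align) { return align_floor(v + align - 1, align); }

/*
 * Mirrors the offset math in sxe2_dev_pci_seg_map(): compute the
 * page-aligned mmap offset, the register's offset inside the first
 * mapped page, and the page-aligned mapping length.
 */
static void bar_seg_align(size_t org_offset, size_t org_len, size_t page_size,
                          size_t *aligned_offset, size_t *page_inner_offset,
                          size_t *aligned_len)
{
    *aligned_offset = align_floor(org_offset, page_size);
    *page_inner_offset = org_offset - *aligned_offset;
    *aligned_len = align_ceil(*page_inner_offset + org_len, page_size);
}
```

The register is then reached at mapped_base + page_inner_offset, which is why the driver stores page_inner_offset alongside the mapped address in its segment info.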
* [PATCH v9 07/10] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 ` (5 preceding siblings ...) 2026-05-06 9:56 ` [PATCH v9 06/10] drivers: support PCI BAR mapping liujie5 @ 2026-05-06 9:56 ` liujie5 2026-05-06 9:57 ` [PATCH v9 08/10] net/sxe2: support queue setup and control liujie5 ` (2 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:56 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by the userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Fail to dma map, ret=%d", ret); + goto l_end; + } + 
+l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Fail to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "iommu does not support pa mode"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "va mode not supported without iommu, please use pa mode."); + ret = SXE2_ERR_IO; + goto l_end; + 
} + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h 
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3
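sxe2_drv_dev_dma_map() in this patch only issues the ioctl in VA IOVA mode with kernel IOMMU support; PA mode needs no kernel mapping and is rejected when an IOMMU is active. That gating can be sketched as a pure predicate (hypothetical names, not the driver's real API):

```c
#include <assert.h>
#include <stdbool.h>

enum iova_mode { IOVA_MODE_PA, IOVA_MODE_VA };

/*
 * Mirrors the checks at the top of sxe2_drv_dev_dma_map():
 * - PA mode is valid only without an IOMMU (and then needs no ioctl);
 * - VA mode is valid only when the kernel side supports the IOMMU path.
 */
static bool dma_map_mode_valid(enum iova_mode mode, bool support_iommu)
{
    if (mode == IOVA_MODE_PA)
        return !support_iommu;
    return support_iommu;
}
```

The real function additionally bails out early when the kernel driver has been reset, returning SXE2_ERR_PERM so the application restarts cleanly.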
* [PATCH v9 08/10] net/sxe2: support queue setup and control 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 ` (6 preceding siblings ...) 2026-05-06 9:56 ` [PATCH v9 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-06 9:57 ` liujie5 2026-05-06 9:57 ` [PATCH v9 09/10] drivers: add data path for Rx and Tx liujie5 2026-05-06 9:57 ` [PATCH v9 10/10] net/sxe2: add vectorized " liujie5 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:57 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
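The allocation and cleanup discipline the points above describe (release any previously configured queue before reallocating, unwind partial allocations, make release idempotent) can be sketched outside DPDK as follows. This is an illustrative model only: `calloc`/`free` stand in for `rte_memzone_reserve`/`rte_zmalloc`, and the type and function names are not the driver's real API.

```c
/* Sketch of a reconfiguration-safe queue setup/release pair. */
#include <stdint.h>
#include <stdlib.h>

struct rx_desc { uint64_t pkt_addr, hdr_addr; };

struct rx_queue {
    struct rx_desc *desc_ring;   /* hardware descriptor ring */
    void **buffer_ring;          /* software ring of mbuf pointers */
    uint16_t ring_depth;
};

static void rx_queue_release(struct rx_queue **slot)
{
    struct rx_queue *q = *slot;

    if (q == NULL)
        return;                  /* idempotent: double release is a no-op */
    free(q->buffer_ring);
    free(q->desc_ring);
    free(q);
    *slot = NULL;
}

static int rx_queue_setup(struct rx_queue **slot, uint16_t nb_desc)
{
    struct rx_queue *q;

    rx_queue_release(slot);      /* drop a stale queue on reconfiguration */

    q = calloc(1, sizeof(*q));
    if (q == NULL)
        return -1;
    q->ring_depth = nb_desc;
    q->desc_ring = calloc(nb_desc, sizeof(struct rx_desc));
    q->buffer_ring = calloc(nb_desc, sizeof(void *));
    if (q->desc_ring == NULL || q->buffer_ring == NULL) {
        rx_queue_release(&q);    /* unwind a partial allocation */
        return -1;
    }
    *slot = q;
    return 0;
}
```

Calling setup twice on the same slot reallocates cleanly instead of leaking the first ring, which is the leak scenario the commit message calls out for queue reconfiguration and device close.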
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 98d0b7fc6d..61467a4e31 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -23,6 +23,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 
sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { 
#define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + 
rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if 
(dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth 
= ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + 
dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configured with keep CRC.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc
*desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + 
PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + 
rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++
b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " "number of tx descriptors.
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2 tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + +
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt failed", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Failed to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Failed to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
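The start-all / roll-back-on-failure flow used by sxe2_txqs_all_start in the patch above (start each non-deferred queue in order; if one fails, stop the queues that already started) can be sketched in plain C. The queue_start/queue_stop helpers, NB_QUEUES, and the fail_at knob below are illustrative stand-ins for this sketch only, not driver or DPDK API:

```c
#include <assert.h>
#include <stdbool.h>

#define NB_QUEUES 4

/* Hypothetical per-queue state, used only for this illustration. */
static bool started[NB_QUEUES];
static int fail_at = -1; /* index of a queue whose start is forced to fail */

static int queue_start(int idx)
{
	if (idx == fail_at)
		return -1;
	started[idx] = true;
	return 0;
}

static void queue_stop(int idx)
{
	started[idx] = false; /* stopping a never-started queue is a no-op */
}

/* Start every queue; on failure, stop the queues started so far. */
static int all_queues_start(void)
{
	int i, j;

	for (i = 0; i < NB_QUEUES; i++) {
		if (queue_start(i) != 0) {
			for (j = 0; j < i; j++)
				queue_stop(j);
			return -1;
		}
	}
	return 0;
}
```

The driver's version additionally skips queues flagged tx_deferred_start and reports errors through PMD logging, but the rollback shape is the same.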
* [PATCH v9 09/10] drivers: add data path for Rx and Tx 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 ` (7 preceding siblings ...) 2026-05-06 9:57 ` [PATCH v9 08/10] net/sxe2: support queue setup and control liujie5 @ 2026-05-06 9:57 ` liujie5 2026-05-06 9:57 ` [PATCH v9 10/10] net/sxe2: add vectorized " liujie5 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 9:57 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_rx_pkts_scattered and sxe2_tx_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 13 +- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 11 files changed, 1082 insertions(+), 133 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 537d4e9f6a..d2ed1460a3 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -28,7 +28,7 @@ static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list static TAILQ_HEAD(sxe2_common_devices, 
sxe2_common_device) sxe2_common_devices_list = TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); -static pthread_mutex_t sxe2_common_devices_list_lock; +static rte_spinlock_t sxe2_common_devices_list_lock; static struct rte_pci_id *sxe2_common_pci_id_table; @@ -223,9 +223,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( cdev->config.kernel_reset = false; rte_ticketlock_init(&cdev->config.lock); - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); l_end: return cdev; @@ -233,10 +233,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( static void sxe2_common_device_free(struct sxe2_common_device *cdev) { - - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); rte_free(cdev); } @@ -662,7 +661,7 @@ sxe2_common_init(void) if (sxe2_commoin_inited) goto l_end; - pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + rte_spinlock_init(&sxe2_common_devices_list_lock); #ifdef SXE2_DPDK_DEBUG sxe2_common_log_stream_init(); #endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) 
\ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) 
\ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ 
-178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "bar idx=%d, fd=%d, len=%"PRIu64", offset=0x%"PRIx64", pci offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; } @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu, va mode not supported, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto 
l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 61467a4e31..b331451160 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -25,6 +25,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < 
bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > 
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v9 10/10] net/sxe2: add vectorized Rx and Tx 2026-05-06 9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5 ` (8 preceding siblings ...) 2026-05-06 9:57 ` [PATCH v9 09/10] drivers: add data path for Rx and Tx liujie5 @ 2026-05-06 9:57 ` liujie5 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 9 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-06 9:57 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch implements the vectorized data path for the sxe2 PMD. It utilizes SIMD instructions (e.g., SSE) to process multiple packets simultaneously, significantly improving throughput for small packet processing. The implementation includes: * Vectorized Rx burst function for bulk descriptor processing. * Vectorized Tx burst function with optimized resource cleanup. * Capability flags update to reflect vectorized path support. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 11 + drivers/net/sxe2/sxe2_ethdev.c | 8 +- drivers/net/sxe2/sxe2_ethdev.h | 1 - drivers/net/sxe2/sxe2_txrx.c | 222 +++++++--- drivers/net/sxe2/sxe2_txrx.h | 12 +- drivers/net/sxe2/sxe2_txrx_poll.c | 186 +++++++- drivers/net/sxe2/sxe2_txrx_poll.h | 3 +- drivers/net/sxe2/sxe2_txrx_vec.c | 188 ++++++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 547 ++++++++++++++++++++++++ 11 files changed, 1418 insertions(+), 67 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index b331451160..0975366c10 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -18,6 +18,16 @@ cflags += ['-g'] deps += 
['common_sxe2', 'hash','cryptodev','security'] +includes += include_directories('../../common/sxe2') + +if arch_subdir == 'x86' + sources += files('sxe2_txrx_vec_sse.c') + + if is_windows and cc.get_id() != 'clang' + cflags += ['-fno-asynchronous-unwind-tables'] + endif +endif + sources += files( 'sxe2_ethdev.c', 'sxe2_cmd_chnl.c', @@ -27,6 +37,7 @@ sources += files( 'sxe2_rx.c', 'sxe2_txrx_poll.c', 'sxe2_txrx.c', + 'sxe2_txrx_vec.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 68d7e36cf1..7eaa1722d0 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -58,17 +58,11 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { }; static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { - /* SXE2_PCI_MAP_RES_INVALID */ {0, 0, 0}, - /* SXE2_PCI_MAP_RES_DOORBELL_TX */ { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ { SXE2_RXQ_TAIL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_DYN */ { SXE2_VF_DYN_CTL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ { SXE2_VF_INT_ITR(0, 0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_MSIX */ { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, }; @@ -312,6 +306,8 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .rxq_info_get = sxe2_rx_queue_info_get, .txq_info_get = sxe2_tx_queue_info_get, + .rx_burst_mode_get = sxe2_rx_burst_mode_get, + .tx_burst_mode_get = sxe2_tx_burst_mode_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index 7999e4f331..0881d57d77 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -11,7 +11,6 @@ #include <rte_tm_driver.h> #include <rte_io.h> -#include "sxe2_common.h" #include "sxe2_errno.h" #include "sxe2_type.h" #include "sxe2_vsi.h" diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c index 
3e88ab5241..348f420bb1 100644 --- a/drivers/net/sxe2/sxe2_txrx.c +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -9,12 +9,11 @@ #include <rte_memzone.h> #include <ethdev_driver.h> #include <unistd.h> - #include "sxe2_txrx.h" #include "sxe2_txrx_common.h" +#include "sxe2_txrx_vec.h" #include "sxe2_txrx_poll.h" #include "sxe2_ethdev.h" - #include "sxe2_common_log.h" #include "sxe2_errno.h" #include "sxe2_osal.h" @@ -22,18 +21,38 @@ #if defined(RTE_ARCH_ARM64) #include <rte_cpuflags.h> #endif - +s32 __rte_cold +sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) || + txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + } + *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH; +l_end: + return ret; +} static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) { struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; s32 ret; u16 desc_idx; - if (unlikely(offset >= txq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - desc_idx = txq->next_use + offset; desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); if (desc_idx >= txq->ring_depth) { @@ -41,19 +60,16 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) if (desc_idx >= txq->ring_depth) desc_idx -= txq->ring_depth; } - if (desc_idx == 0) desc_idx = txq->rs_thresh - 1; else desc_idx -= 1; - if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == (txq->desc_ring[desc_idx].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) ret = RTE_ETH_TX_DESC_DONE; else ret = RTE_ETH_TX_DESC_FULL; - l_end: return ret; } @@ -61,13 +77,11 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) static inline s32 
sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) { struct rte_mbuf *m_seg = mbuf; - while (m_seg != NULL) { if (m_seg->data_len == 0) return SXE2_ERR_INVAL; m_seg = m_seg->next; } - return SXE2_SUCCESS; } @@ -79,7 +93,6 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, u64 ol_flags = 0; s32 ret = SXE2_SUCCESS; s32 i = 0; - for (i = 0; i < nb_pkts; i++) { mbuf = tx_pkts[i]; if (!mbuf) @@ -98,12 +111,10 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -SXE2_ERR_INVAL; goto l_end; } - if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { rte_errno = -SXE2_ERR_INVAL; goto l_end; } - #ifdef RTE_ETHDEV_DEBUG_TX ret = rte_validate_tx_offload(mbuf); if (ret != SXE2_SUCCESS) { @@ -116,14 +127,12 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -ret; goto l_end; } - ret = sxe2_tx_mbuf_empty_check(mbuf); if (ret != SXE2_SUCCESS) { rte_errno = -ret; goto l_end; } } - l_end: return i; } @@ -132,42 +141,119 @@ void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 tx_mode_flags = 0; - + s32 ret; + u32 vec_flags; + u32 batch_flags; + RTE_SET_USED(vec_flags); PMD_INIT_FUNC_TRACE(); - - dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; - dev->tx_pkt_burst = sxe2_tx_pkts; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_tx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) { +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) { +#ifdef CC_AVX512_SUPPORT + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX512); +#else + PMD_LOG_INFO(TX, "AVX512 is not supported in build env."); +#endif + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK)) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 
1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX2); + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK))) + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_SSE); +#endif + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + ret = sxe2_tx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK); + } + } + ret = sxe2_tx_simple_batch_support_check(dev, &batch_flags); + if (ret == SXE2_SUCCESS && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH) + tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH; + } + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + dev->tx_pkt_prepare = NULL; +#ifdef RTE_ARCH_X86 + if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse; + } else { + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple; + } +#endif + } else { + if (tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) { + dev->tx_pkt_prepare = NULL; + dev->tx_pkt_burst = sxe2_tx_pkts_simple; + } else { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + } + } adapter->q_ctxt.tx_mode_flags = tx_mode_flags; PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", tx_mode_flags, dev->data->port_id); } +static const struct { + eth_tx_burst_t tx_burst; + const char *info; +} sxe2_tx_burst_infos[] = { + { sxe2_tx_pkts, "Scalar" }, +#ifdef RTE_ARCH_X86 + { sxe2_tx_pkts_vec_sse, "Vector SSE" }, + { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" }, +#endif +}; + +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode) +{ + eth_tx_burst_t pkt_burst = dev->tx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i; + u32 size; + size = RTE_DIM(sxe2_tx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_tx_burst_infos[i].info); + ret = 
SXE2_SUCCESS; + break; + } + } + return ret; +} + static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) { struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; s32 ret; - if (unlikely(offset >= rxq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - if (offset >= rxq->ring_depth - rxq->hold_num) { ret = RTE_ETH_RX_DESC_UNAVAIL; goto l_end; } - if (rxq->processing_idx + offset >= rxq->ring_depth) desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; else desc = &rxq->desc_ring[rxq->processing_idx + offset]; - if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) ret = RTE_ETH_RX_DESC_DONE; else ret = RTE_ETH_RX_DESC_AVAIL; - l_end: PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", offset, ret, rxq->queue_id, rxq->port_id); @@ -179,7 +265,6 @@ static s32 sxe2_rx_queue_count(void *rx_queue) struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; u16 done_num = 0; - desc = &rxq->desc_ring[rxq->processing_idx]; while ((done_num < rxq->ring_depth) && (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & @@ -190,59 +275,92 @@ static s32 sxe2_rx_queue_count(void *rx_queue) else desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; } - PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", done_num, rxq->queue_id, rxq->port_id); - return done_num; } -static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) -{ - struct sxe2_rx_queue *rxq; - bool en = false; - u16 i; - - for (i = 0; i < dev->data->nb_rx_queues; ++i) { - rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; - if (rxq == NULL) - continue; - - if (0 != (rxq->offloads & offload)) { - en = true; - goto l_end; - } - } - -l_end: - return en; -} - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 rx_mode_flags = 0; + s32 ret; + u32 vec_flags; 
PMD_INIT_FUNC_TRACE(); - + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_rx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { +#ifdef RTE_ARCH_X86 + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_AVX2); + } + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_SSE); + } +#endif + if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) { + ret = sxe2_rx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK); + } + } + } +#ifdef RTE_ARCH_X86 + if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload; +#endif if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; else dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - + goto l_end; +l_end: PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", rx_mode_flags, dev->data->port_id); adapter->q_ctxt.rx_mode_flags = rx_mode_flags; } +static const struct { + eth_rx_burst_t rx_burst; + const char *info; +} sxe2_rx_burst_infos[] = { + { sxe2_rx_pkts_scattered, "Scalar Scattered" }, + { sxe2_rx_pkts_scattered_split, "Scalar Scattered split" }, +#ifdef RTE_ARCH_X86 + { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" }, +#endif +}; + +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode) +{ + eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i, size; + size = RTE_DIM(sxe2_rx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == 
sxe2_rx_burst_infos[i].rx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_rx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + void sxe2_set_common_function(struct rte_eth_dev *dev) { PMD_INIT_FUNC_TRACE(); - dev->rx_queue_count = sxe2_rx_queue_count; dev->rx_descriptor_status = sxe2_rx_desciptor_status; dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - dev->tx_descriptor_status = sxe2_tx_desciptor_status; dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; dev->tx_pkt_burst = sxe2_tx_pkts; diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h index cd9ebfa32f..7bb852789c 100644 --- a/drivers/net/sxe2/sxe2_txrx.h +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -6,16 +6,16 @@ #define SXE2_TXRX_H #include <ethdev_driver.h> #include "sxe2_queue.h" - void sxe2_set_common_function(struct rte_eth_dev *dev); - +s32 __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags); u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); - void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); - +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode); +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode); #endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c index 55bea8b74c..37ce4d8e17 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.c +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -19,6 +19,66 @@ #include "sxe2_common_log.h" #include "sxe2_errno.h" +static __rte_always_inline s32 +sxe2_tx_bufs_free(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; 
+ if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1]; + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) { + mbuf = buffer[0].mbuf; + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = buffer[i].mbuf; + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + rte_mempool_put(buffer->mbuf->pool, buffer->mbuf); + buffer->mbuf = NULL; + } + } + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + buffer->mbuf = NULL; + } + } + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) { s32 ret = SXE2_SUCCESS; @@ -330,6 +390,130 @@ u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) return tx_num; } +static __rte_always_inline void +sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 desc_offset; + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + 
desc_offset, (*tx_pkts)->data_len, 0); +} +static __rte_always_inline void +sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 i; + u32 desc_offset; + for (i = 0; i < SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) { + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, + (*tx_pkts)->data_len, + 0); + } +} + +static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use]; + volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use]; + u32 i, j; + u32 mainpart; + u32 leftover; + mainpart = nb_pkts & ((u32)~SXE2_TX_FILL_PER_LOOP_MASK); + leftover = nb_pkts & ((u32)SXE2_TX_FILL_PER_LOOP_MASK); + for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) { + for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j) + (buffer + i + j)->mbuf = *(tx_pkts + i + j); + sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i); + } + if (unlikely(leftover > 0)) { + for (i = 0; i < leftover; ++i) { + (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i); + sxe2_tx_data_desc_fill(desc + mainpart + i, + tx_pkts + mainpart + i); + } + } +} + +static inline u16 sxe2_tx_pkts_batch(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + u16 res_num = 0; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx batch: not enough free descriptors, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + 
} + txq->desc_free_num -= nb_pkts; + if ((txq->next_use + nb_pkts) > txq->ring_depth) { + res_num = txq->ring_depth - txq->next_use; + sxe2_tx_ring_fill(txq, tx_pkts, res_num); + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs = txq->rs_thresh - 1; + txq->next_use = 0; + } + sxe2_tx_ring_fill(txq, tx_pkts + res_num, nb_pkts - res_num); + txq->next_use = txq->next_use + (nb_pkts - res_num); + if (txq->next_use > txq->next_rs) { + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + if (txq->next_rs >= txq->ring_depth) + txq->next_rs = txq->rs_thresh - 1; + } + if (txq->next_use >= txq->ring_depth) + txq->next_use = 0; + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, txq->next_use, nb_pkts); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use); + SXE2_TX_STATS_CNT(tx_queue, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 tx_done_num; + u16 tx_once_num; + u16 tx_need_num; + if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) { + tx_done_num = sxe2_tx_pkts_batch(tx_queue, + tx_pkts, nb_pkts); + goto l_end; + } + tx_done_num = 0; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM); + tx_once_num = sxe2_tx_pkts_batch(tx_queue, + &tx_pkts[tx_done_num], tx_need_num); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } +l_end: + return tx_done_num; +} + static inline void sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) { @@ -585,7 +769,7 @@ u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 struct rte_mbuf *cur_mbuf; struct rte_mbuf *cur_mbuf_pay; struct rte_mbuf *new_mbuf; - struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf 
*new_mbuf_pay = NULL; struct rte_mbuf *first_seg; struct rte_mbuf *last_seg; u64 qword1; diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h index 4924b0f41f..67da08e58e 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.h +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -8,7 +8,8 @@ #include "sxe2_queue.h" u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); diff --git a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c new file mode 100644 index 0000000000..1e44d510cd --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.c @@ -0,0 +1,188 @@ +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_rx_queue *rxq; + s32 ret = SXE2_SUCCESS; + u16 i; + *vec_flags = SXE2_RX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (!rte_is_power_of_2(rxq->ring_depth)) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC && + (rxq->ring_depth % rxq->rx_free_thresh) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = 
false; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + if ((rxq->offloads & offload) != 0) { + en = true; + goto l_end; + } + } +l_end: + return en; +} + +static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq) +{ + const u16 mask = rxq->ring_depth - 1; + u16 i; + if (unlikely(!rxq->buffer_ring)) { + PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring is NULL. " + "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id); + return; + } + if (rxq->realloc_num >= rxq->ring_depth) + return; + if (rxq->realloc_num == 0) { + for (i = 0; i < rxq->ring_depth; ++i) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } else { + for (i = rxq->processing_idx; + i != rxq->realloc_start; + i = (i + 1) & mask) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + rxq->realloc_num = rxq->ring_depth; + memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0])); +} + +static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq) +{ + uintptr_t data; + struct rte_mbuf mbuf_def; + mbuf_def.buf_addr = 0; + mbuf_def.nb_segs = 1; + mbuf_def.data_off = RTE_PKTMBUF_HEADROOM; + mbuf_def.port = rxq->port_id; + rte_mbuf_refcnt_set(&mbuf_def, 1); + rte_compiler_barrier(); + data = (uintptr_t)&mbuf_def.rearm_data; + rxq->mbuf_init_value = *(u64 *)data; +} + +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_rx_queue *rxq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i); + continue; + } + rxq->ops.mbufs_release = sxe2_rx_queue_mbufs_release_vec; + sxe2_rx_queue_vec_init(rxq); + } + return ret; +} + 
+s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u32 i; + *vec_flags = SXE2_TX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC || + txq->rs_thresh > SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + u16 i; + if (unlikely(txq == NULL || txq->buffer_ring == NULL)) { + PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params."); + goto l_end; + } + i = txq->next_dd - (txq->rs_thresh - 1); + buffer = txq->buffer_ring; + if (txq->next_use < i) { + for ( ; i < txq->ring_depth; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + i = 0; + } + for (; i < txq->next_use; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } +l_end: + return; +} + +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_tx_queue *txq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) { + PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i); + continue; + } + txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec; + } + return ret; +} diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h new file mode 100644 index 0000000000..cb6a3dd3b8 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: 
BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef _SXE2_TXRX_VEC_H_ +#define _SXE2_TXRX_VEC_H_ +#include <ethdev_driver.h> +#include "sxe2_queue.h" +#include "sxe2_type.h" +#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10) +#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \ + SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \ + SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \ + SXE2_RX_MODE_VEC_NEON) +#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10) +#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \ + SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \ + SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \ + SXE2_TX_MODE_VEC_NEON) +#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \ + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \ + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_TSO | \ + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_SECURITY | \ + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) +#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_CKSUM) +#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP | \ + 
RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \ + RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_SECURITY | \ + RTE_ETH_RX_OFFLOAD_QINQ_STRIP) +#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH) +#ifdef RTE_ARCH_X86 +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts); +#endif +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload); +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev); +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h new file mode 100644 index 0000000000..c0405c9a59 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h @@ -0,0 +1,235 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_TXRX_VEC_COMMON_H__ +#define __SXE2_TXRX_VEC_COMMON_H__ +#include <rte_atomic.h> +#ifdef PCLINT +#include "avx_stub.h" +#endif +#include "sxe2_rx.h" +#include "sxe2_queue.h" +#include "sxe2_tx.h" +#include "sxe2_vsi.h" +#include "sxe2_ethdev.h" +#define SXE2_RX_NUM_PER_LOOP_SSE 4 +#define SXE2_RX_NUM_PER_LOOP_AVX 8 +#define SXE2_RX_NUM_PER_LOOP_NEON 4 +#define SXE2_RX_REARM_THRESH_VEC 64 +#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32 +#define SXE2_TX_RS_THRESH_MIN_VEC 32 +#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64 + +static __rte_always_inline void +sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 i; + for (i = 0; i < nb_pkts; ++i) + buffer[i].mbuf = tx_pkts[i]; +} + +static __rte_always_inline s32 +sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)]; + mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf); + if (likely(mbuf)) { + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (likely(mbuf)) { + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + } + 
} + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + +static inline void +sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, u64 *desc_qw1) +{ + u64 offloads = mbuf->ol_flags; + u32 desc_cmd = 0; + u32 desc_offset = 0; + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + default: + break; + } + *desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + *desc_qw1 |= ((u64)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT; + } + *desc_qw1 |= ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT; +} +#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \ + (((_flags) & 0x30) >> 4) + +static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq, + struct rte_mbuf *mbuf, u8 umbcast_flag) +{ + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed); + switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } + } +} + +static inline u16 +sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq, + struct rte_mbuf **mbuf_bufs, u16 mbuf_num, + u8 *split_rxe_flags, u8 *umbcast_flags) +{ + struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *tmp_seg; + u16 done_num, buf_idx; + done_num = 0; + for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) { + if (last_seg) { + last_seg->next = mbuf_bufs[buf_idx]; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + first_seg->nb_segs++; + first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len; + last_seg = last_seg->next; + if (split_rxe_flags[buf_idx] == 0) { + first_seg->hash = last_seg->hash; + first_seg->vlan_tci = last_seg->vlan_tci; + first_seg->ol_flags = last_seg->ol_flags; + first_seg->pkt_len -= rxq->crc_len; + if (last_seg->data_len > rxq->crc_len) { + last_seg->data_len -= rxq->crc_len; + } else { + tmp_seg = first_seg; + first_seg->nb_segs--; + while (tmp_seg->next != last_seg) + tmp_seg = tmp_seg->next; + tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len); + tmp_seg->next = NULL; + rte_pktmbuf_free_seg(last_seg); + last_seg = NULL; + } + done_pkts[done_num++] = first_seg; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]); + first_seg = NULL; + last_seg = NULL; + } else if 
(split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + continue; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + last_seg = NULL; + continue; + } + } else { + if (split_rxe_flags[buf_idx] == 0) { + done_pkts[done_num++] = mbuf_bufs[buf_idx]; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx], + umbcast_flags[buf_idx]); + continue; + } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + first_seg = mbuf_bufs[buf_idx]; + last_seg = first_seg; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]); + continue; + } + } + } + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *))); + return done_num; +} +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c new file mode 100644 index 0000000000..9bc291577b --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c @@ -0,0 +1,547 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_bitops.h> +#include <rte_malloc.h> +#include <rte_mempool.h> +#include <rte_vect.h> +#include "rte_common.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_queue.h" +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_vsi.h" + +static __rte_always_inline void +sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf *pkt, + u64 desc_cmd, bool with_offloads) +{ + __m128i data_desc; + u64 desc_qw1; + u32 desc_offset; + desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA | + ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT | + ((u64)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len); + desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (with_offloads) + sxe2_tx_desc_fill_offloads(pkt, &desc_qw1); + data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc); +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + volatile union sxe2_tx_data_desc *desc; + struct sxe2_tx_buffer *buffer; + u16 next_use; + u16 res_num; + u16 tx_num; + u16 i; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free_vec(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx pkts sse batch: not enough free descriptors, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + tx_num = nb_pkts; + next_use = txq->next_use; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + txq->desc_free_num -= nb_pkts; + res_num = txq->ring_depth - txq->next_use; + if (tx_num >= res_num) { + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num); + for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, 
+ SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++, + (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS), + with_offloads); + tx_num -= res_num; + next_use = 0; + txq->next_rs = txq->rs_thresh - 1; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + } + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num); + for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + next_use += tx_num; + if (next_use > txq->next_rs) { + txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + } + txq->next_use = next_use; + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, nb_pkts); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + u16 tx_done_num = 0; + u16 tx_once_num; + u16 tx_need_num; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh); + tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq, + tx_pkts + tx_done_num, + tx_need_num, with_offloads); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } + return tx_done_num; +} + +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, false); +} +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, true); +} + +static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq) +{ + volatile 
union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + struct rte_mbuf *mbuf0, *mbuf1; + __m128i dma_addr0, dma_addr1; + __m128i virt_addr0, virt_addr1; + __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, + RTE_PKTMBUF_HEADROOM); + s32 ret; + u16 i; + u16 new_tail; + buffer = &rxq->buffer_ring[rxq->realloc_start]; + desc = &rxq->desc_ring[rxq->realloc_start]; + ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer, + SXE2_RX_REARM_THRESH_VEC); + if (ret != 0) { + PMD_LOG_RX_INFO("Rx mbuf vec alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, rxq->queue_id); + if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) { + dma_addr0 = _mm_setzero_si128(); + for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) { + buffer[i] = &rxq->fake_mbuf; + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read), + dma_addr0); + } + } + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed += + SXE2_RX_REARM_THRESH_VEC; + goto l_end; + } + for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) { + mbuf0 = buffer[0]; + mbuf1 = buffer[1]; +#if RTE_IOVA_IN_MBUF + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) != + offsetof(struct rte_mbuf, buf_addr) + 8); +#endif + virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr); + virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr); +#if RTE_IOVA_IN_MBUF + dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1); +#else + dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1); +#endif + dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room); + dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr0); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr1); + } + rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC; + if (rxq->realloc_start >= rxq->ring_depth) + rxq->realloc_start = 0; + rxq->realloc_num -= 
SXE2_RX_REARM_THRESH_VEC; + new_tail = (rxq->realloc_start == 0) ? + (rxq->ring_depth - 1) : (rxq->realloc_start - 1); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail); +l_end: + return; +} + +static __rte_always_inline __m128i +sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4]) +{ + __m128i descs_tmp1, descs_tmp2; + __m128i descs_fnav_vld; + __m128i v_zeros, v_ffff, v_u32_one; + __m128i m_flags; + const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID); + descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]); + descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]); + descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2); + descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26); + descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31); + v_zeros = _mm_setzero_si128(); + v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros); + v_u32_one = _mm_srli_epi32(v_ffff, 31); + m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one); + m_flags = _mm_and_si128(m_flags, fdir_flags); + return m_flags; +} + +static __rte_always_inline void +sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq, + volatile union sxe2_rx_desc *desc __rte_unused, + __m128i descs_arr[4], + struct rte_mbuf **rx_pkts) +{ + const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value); + __m128i rearm_arr[4]; + __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags; + const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04, + 0x00001C04, 0x00001C04); + const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000, + 0x20000000, 0x20000000); + const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, + 0, 0, 0, RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + 0, 0, 0, 0); + const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH, + 0, 0, 0, 0); + const __m128i cksum_flags = + _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + 
RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1)); + const __m128i cksum_mask = + _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD); + const __m128i vlan_mask = + _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED); + flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]); + tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]); + tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags); + tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags); + tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask); + tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask); + tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo); + flags = _mm_and_si128(tmp_flags, vlan_mask); + tmp_desc_lo = 
_mm_srli_epi32(tmp_desc_lo, 10); + tmp_flags = _mm_shuffle_epi8(cksum_flags, tmp_desc_lo); + tmp_flags = _mm_slli_epi32(tmp_flags, 1); + tmp_flags = _mm_and_si128(tmp_flags, cksum_mask); + flags = _mm_or_si128(flags, tmp_flags); + tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27); + tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi); + flags = _mm_or_si128(flags, tmp_flags); +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + if (rxq->fnav_enable) { + __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr); + flags = _mm_or_si128(flags, tmp_fnav_flags); + rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id; + rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id; + rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id; + rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id; + } +#endif + rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30); + rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30); + rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30); + rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) != + offsetof(struct rte_mbuf, rearm_data) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) != + RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]); +} + +static inline u16 +sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts, u8 *split_rxe_flags, + u8 *umbcast_flags) +{ + volatile union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i staterr, sterr_tmp1, sterr_tmp2; + 
__m128i pmbuf0; + __m128i ptype_all; +#ifdef RTE_ARCH_X86_64 + __m128i pmbuf1; +#endif + u32 i; + u32 bit_num; + u16 done_num = 0; + const u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + const __m128i crc_adjust = + _mm_set_epi16(0, 0, 0, + -rxq->crc_len, + 0, -rxq->crc_len, + 0, 0); + const __m128i rvp_shuf_mask = + _mm_set_epi8(7, 6, 5, 4, + 3, 2, + 13, 12, + 0xFF, 0xFF, 13, 12, + 0xFF, 0xFF, 0xFF, 0xFF); + const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL, + 0x0000000100000001LL); + const __m128i eop_mask = _mm_slli_epi32(dd_mask, + SXE2_RX_DESC_STATUS_EOP_SHIFT); + const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL, + 0x0000208000002080LL); + const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x04, 0x0C, + 0x00, 0x08); + const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12); + desc = &rxq->desc_ring[rxq->processing_idx]; + rte_prefetch0(desc); + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE); + if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC) + sxe2_rx_queue_rearm_sse(rxq); + if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK) == 0) + goto l_end; + buffer = &rxq->buffer_ring[rxq->processing_idx]; + for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE, + desc += SXE2_RX_NUM_PER_LOOP_SSE) { + pmbuf0 = 
_mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i])); + descs_arr[3] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 3)); + rte_compiler_barrier(); + _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0); +#ifdef RTE_ARCH_X86_64 + pmbuf1 = _mm_loadu_si128((__m128i *)&buffer[i + 2]); +#endif + descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 2)); + rte_compiler_barrier(); + descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 1)); + rte_compiler_barrier(); + descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc)); +#ifdef RTE_ARCH_X86_64 + _mm_storeu_si128((__m128i *)&rx_pkts[i + 2], pmbuf1); +#endif + if (split_rxe_flags) { + rte_mbuf_prefetch_part2(rx_pkts[i]); + rte_mbuf_prefetch_part2(rx_pkts[i + 1]); + rte_mbuf_prefetch_part2(rx_pkts[i + 2]); + rte_mbuf_prefetch_part2(rx_pkts[i + 3]); + } + rte_compiler_barrier(); + mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask); + mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask); + mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask); + mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask); + sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]); + sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]); + sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts); + mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust); + mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust); + mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust); + mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust); + staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2); + ptype_all = _mm_and_si128(staterr, ptype_mask); + _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1, + mbuf_arr[3]); + _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1, + mbuf_arr[2]); + if (umbcast_flags != NULL) { + const __m128i umbcast_mask = + _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + 
SXE2_RX_DESC_STATUS_UMBCAST_MASK); + const __m128i umbcast_shuf_mask = + _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x07, 0x0F, + 0x03, 0x0B); + __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask); + umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask); + *(s32 *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits); + umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + if (split_rxe_flags != NULL) { + __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask); + __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask); + rxe_bits = _mm_srli_epi32(rxe_bits, 7); + eop_bits = _mm_or_si128(eop_bits, rxe_bits); + eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask); + *(s32 *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits); + split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + staterr = _mm_and_si128(staterr, dd_mask); + staterr = _mm_packs_epi32(staterr, _mm_setzero_si128()); + _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1, + mbuf_arr[1]); + _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1, + mbuf_arr[0]); + rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)]; + rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)]; + rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)]; + rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)]; + bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr)); + done_num += bit_num; + if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE)) + break; + } + rxq->processing_idx += done_num; + rxq->processing_idx &= (rxq->ring_depth - 1); + rxq->realloc_num += done_num; + PMD_LOG_RX_DEBUG("port_id=%u queue_id=%u last_id=%u recv_pkts=%d", + rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num); +l_end: + return done_num; +} +static __rte_always_inline u16 +sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + const u64 *split_rxe_flags64; + u8 
split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u8 umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u16 rx_done_num; + u16 rx_pkt_done_num; + rx_pkt_done_num = 0; + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, umbcast_flags); + } else { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, NULL); + } + if (rx_done_num == 0) + goto l_end; + if (!rxq->vsi->adapter->devargs.sw_stats_en) { + split_rxe_flags64 = (u64 *)split_rxe_flags; + if (rxq->pkt_first_seg == NULL && + split_rxe_flags64[0] == 0 && + split_rxe_flags64[1] == 0 && + split_rxe_flags64[2] == 0 && + split_rxe_flags64[3] == 0) { + rx_pkt_done_num = rx_done_num; + goto l_end; + } + if (rxq->pkt_first_seg == NULL) { + while (rx_pkt_done_num < rx_done_num && + split_rxe_flags[rx_pkt_done_num] == 0) + rx_pkt_done_num++; + if (rx_pkt_done_num == rx_done_num) + goto l_end; + rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num]; + } + } + rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num], + rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num], + &umbcast_flags[rx_pkt_done_num]); +l_end: + return rx_pkt_done_num; +} + +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + u16 done_num = 0; + u16 once_num; + while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) { + once_num = + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, + SXE2_RX_PKTS_BURST_BATCH_NUM_VEC); + done_num += once_num; + nb_pkts -= once_num; + if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) + goto l_end; + } + done_num += + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, nb_pkts); +l_end: + SXE2_RX_STATS_CNT(rx_queue, rx_pkts_num, done_num); + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v10 00/10] Add Linkdata sxe2 driver 2026-05-06 9:57 ` [PATCH v9 10/10] net/sxe2: add vectorized " liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 01/10] mailmap: add Jie Liu liujie5 ` (10 more replies) 0 siblings, 11 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> V10: - Addressed AI comments Jie Liu (10): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control drivers: add data path for Rx and Tx net/sxe2: add vectorized Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 4 + drivers/common/sxe2/meson.build | 21 + drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 43 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 971 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 315 +++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ 
drivers/net/sxe2/sxe2_queue.c | 39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 367 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 966 ++++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 17 + drivers/net/sxe2/sxe2_txrx_vec.c | 192 ++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 547 ++++++++++++ drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 44 files changed, 10056 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c 
create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
* [PATCH v10 01/10] mailmap: add Jie Liu 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 02/10] doc: add sxe2 guide and release notes liujie5 ` (9 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 895412e568..d2c4485636 100644 --- a/.mailmap +++ b/.mailmap @@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v10 02/10] doc: add sxe2 guide and release notes 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 2026-05-06 11:35 ` [PATCH v10 01/10] mailmap: add Jie Liu liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 03/10] drivers: add sxe2 basic structures liujie5 ` (8 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 4 ++++ 4 files changed, 39 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates only be supported when non-vector path +; is selected. 
+; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps Network Adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported + +Implementation details +---------------------- + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resources allocations are handled by the kernel +combined with hardware specifications that allow it to handle virtual memory +addresses directly ensure that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index f012d47a4b..fa0f0f5cca 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -64,6 +64,10 @@ New Features * ``--auto-probing`` enables the initial bus probing, which is the current default behavior. +* **Added Linkdata sxe2 ethernet driver.** + + Added network driver for the Linkdata Network Adapters. 
+ Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v10 03/10] drivers: add sxe2 basic structures 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 2026-05-06 11:35 ` [PATCH v10 01/10] mailmap: add Jie Liu liujie5 2026-05-06 11:35 ` [PATCH v10 02/10] doc: add sxe2 guide and release notes liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 04/10] common/sxe2: add base driver skeleton liujie5 ` (7 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 19 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1965 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..09ce556f70 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,19 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2023 Corigine, Inc. + +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +cflags += [ + '-DSXE2_DPDK_DRIVER', + '-DSXE2_DPDK_DEBUG', +] + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common_log.c', +) diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c new file mode 100644 index 0000000000..e2963ce762 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <eal_export.h> +#include <string.h> +#include <time.h> +#include <rte_log.h> + +#include "sxe2_common_log.h" + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_COMMON_LOG_FILE_NAME_LEN 256 +#define SXE2_COMMON_LOG_FILE_PATH "/var/log/" + +FILE *g_sxe2_common_log_fp; +s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0}; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init) +void +sxe2_common_log_stream_init(void) +{ + FILE *fp; + struct tm *td; + time_t rawtime; + u8 len; + s8 stime[40]; + + if (g_sxe2_common_log_fp) + goto l_end; + + memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN); + + len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN, + "%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH); + + time(&rawtime); + td = localtime(&rawtime); + strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td); + + snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len, + "%s", stime); + + fp = fopen(g_sxe2_common_log_filename, "w+"); + if (fp == NULL) { + RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.", + g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA + strerror(errno)); + goto l_end; + } + g_sxe2_common_log_fp = fp; + +l_end: + return; +} +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open) +void +sxe2_common_log_stream_open(void) +{ + rte_openlog_stream(g_sxe2_common_log_fp); +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close) +void +sxe2_common_log_stream_close(void) +{ + rte_openlog_stream(NULL); +} +#endif + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); +#endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..8ade49d020 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, 
Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#ifndef RTE_EXEC_ENV_WINDOWS +#include <pthread.h> +#else +#include <windows.h> +#endif + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define STIME(log_time) \ + do { \ + time_t tv; \ + struct tm *td; \ + time(&tv); \ + td = localtime(&tv); \ + strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \ + } while (0) + +#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x)) + +#ifndef RTE_EXEC_ENV_WINDOWS +#define get_current_thread_id() ((uint64_t)pthread_self()) +#else +#define get_current_thread_id() ((uint64_t)GetCurrentThreadId()) +#endif + +#ifdef SXE2_DPDK_DEBUG + +__rte_internal +void +sxe2_common_log_stream_open(void); + +__rte_internal +void +sxe2_common_log_stream_close(void); + +__rte_internal +void +sxe2_common_log_stream_init(void); + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \ + get_current_thread_id() RTE_LOG_COMMA \ + filename_printf(__FILE__) RTE_LOG_COMMA \ + __LINE__ RTE_LOG_COMMA \ + __func__, RTE_LOG_COMMA \ + adapter->port_id, __VA_ARGS__) + + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) 
\ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) 
\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) \ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) 
\ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = -ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, 
+ + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT 
BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 +#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) 
+#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + 
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define 
SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define 
SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define 
SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 
0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + CGMAC_PORT_OFFSET * (_port) + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + (port) * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_INTERNAL_VER_H__ +#define __SXE2_INTERNAL_VER_H__ + +#define SXE2_VER_MAJOR_OFFSET (16) +#define SXE2_MK_VER(major, minor) \ + ((major) << SXE2_VER_MAJOR_OFFSET | (minor)) +#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff) +#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff) + +#define SXE2_ITR_VER_MAJOR_V100 1 +#define SXE2_ITR_VER_MAJOR_V200 2 + +#define SXE2_ITR_VER_MAJOR 1 +#define SXE2_ITR_VER_MINOR 1 +#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR) + +#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100) +#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200) + +#define SXE2LIB_ITR_VER_MAJOR 1 +#define SXE2LIB_ITR_VER_MINOR 1 +#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR) + +#define SXE2_DRV_CLI_VER_MAJOR 1 +#define SXE2_DRV_CLI_VER_MINOR 1 +#define SXE2_DRV_CLI_VER \ + SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR) + +#endif diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h new file mode 100644 index 0000000000..fd6823fe98 --- /dev/null +++ b/drivers/common/sxe2/sxe2_osal.h @@ -0,0 +1,584 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_OSAL_H__ +#define __SXE2_OSAL_H__ +#include <string.h> +#include <stdint.h> +#include <stdarg.h> +#include <inttypes.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_ether.h> +#include <rte_version.h> + +#include "sxe2_type.h" + +#define BIT(nr) (1UL << (nr)) +#ifndef __BITS_PER_LONG +#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG) +#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG)) + +#ifndef BIT_ULL +#define BIT_ULL(a) (1ULL << (a)) +#endif + +#define MIN(a, b) ((a) < (b) ?
(a) : (b)) + +#define BITS_PER_BYTE 8 + +#define IS_UNICAST_ETHER_ADDR(addr) \ + ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0)) + +#define STRUCT_SIZE(ptr, field, num) \ + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) + +#ifndef TAILQ_FOREACH_SAFE +#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \ + for ((var) = TAILQ_FIRST((head)); \ + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \ + (var) = (tvar)) +#endif + +#define SXE2_QUEUE_WAIT_RETRY_CNT (50) + +#define __iomem + +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define dma_addr_t rte_iova_t + +#define resource_size_t u64 + +#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f) +#define ARRAY_SIZE(arr) RTE_DIM(arr) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +#define CPU_TO_BE16(o) rte_cpu_to_be_16(o) +#define CPU_TO_BE32(o) rte_cpu_to_be_32(o) +#define CPU_TO_BE64(o) rte_cpu_to_be_64(o) +#define BE16_TO_CPU(o) rte_be_to_cpu_16(o) + +#define NTOHS(a) rte_be_to_cpu_16(a) +#define NTOHL(a) rte_be_to_cpu_32(a) +#define HTONS(a) rte_cpu_to_be_16(a) +#define HTONL(a) rte_cpu_to_be_32(a) + +#define udelay(x) rte_delay_us(x) + +#define mdelay(x) rte_delay_us(1000 * (x)) + +#define msleep(x) rte_delay_us(1000 * (x)) + +#ifndef DIV_ROUND_UP +#define DIV_ROUND_UP(n, d) \ + (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) +#endif + +#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) + +#define __bf_shf(x) ((uint32_t)rte_bsf64(x)) + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG 32 +#endif + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask))) + +#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d) 
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + *a += *b; + *b = *a - *b; + *a -= *b; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef char s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
 'common/zsda', # depends on bus.
+ 'common/sxe2', # depends on bus.
 'mempool', # depends on common and bus.
 'dma', # depends on common and bus.
 'net', # depends on common, bus, mempool
-- 
2.47.3
* [PATCH v10 04/10] common/sxe2: add base driver skeleton
From: liujie5 @ 2026-05-06 11:35 UTC
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Initialize the sxe2 PMD skeleton by implementing the PCI probe and
remove functions. This includes the setup and cleanup of a character
device used for control path communication between the user space and
the hardware. The character device provides an interface for
ioctl-based management operations, supporting device-specific
configuration.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 drivers/common/sxe2/meson.build            |   2 +
 drivers/common/sxe2/sxe2_common.c          | 636 +++++++++++++++++++++
 drivers/common/sxe2/sxe2_common.h          |  86 +++
 drivers/common/sxe2/sxe2_ioctl_chnl.c      | 161 ++++++
 drivers/common/sxe2/sxe2_ioctl_chnl.h      | 141 +++++
 drivers/common/sxe2/sxe2_ioctl_chnl_func.h |  45 ++
 6 files changed, 1071 insertions(+)
 create mode 100644 drivers/common/sxe2/sxe2_common.c
 create mode 100644 drivers/common/sxe2/sxe2_common.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h

diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build
index 09ce556f70..b4ad4ed58d 100644
--- a/drivers/common/sxe2/meson.build
+++ b/drivers/common/sxe2/meson.build
@@ -15,5 +15,7 @@ cflags += [
 deps += ['bus_pci', 'net', 'eal', 'ethdev']

 sources = files(
+ 'sxe2_common.c',
 'sxe2_common_log.c',
+ 'sxe2_ioctl_chnl.c',
 )
diff --git a/drivers/common/sxe2/sxe2_common.c
b/drivers/common/sxe2/sxe2_common.c new file mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ 
+ struct sxe2_class_driver *cdrv = NULL; + + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void 
*args) +{ + u32 *class_type = (u32 *)args; + s32 ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshark failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + 
TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + 
} + + cdev->cdrv = cdrv; +l_end: + return ret; +} + +static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto 
l_free_args; + } + + ret = sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool 
exists = false; + + for (i = 0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + 
sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_commoin_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_commoin_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); +#ifdef SXE2_DPDK_DEBUG + sxe2_common_log_stream_init(); +#endif + sxe2_common_pci_init(); + sxe2_commoin_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..f62e00e053 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = SXE2_ERR_IO; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]" + " opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct 
rte_pci_device *pci_dev) +{ + s32 ret = SXE2_SUCCESS; + s32 fd = 0; + s8 drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd > 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + 
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v10 05/10] drivers: add base driver probe skeleton 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 ` (3 preceding siblings ...) 2026-05-06 11:35 ` [PATCH v10 04/10] common/sxe2: add base driver skeleton liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 06/10] drivers: support PCI BAR mapping liujie5 ` (5 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 28 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3031 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create 
mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64, + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 
'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..98d0b7fc6d --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,28 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Run the base subdirectory and collect its target objects + +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, 
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, ¶m); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return 
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start queues."); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + 
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + dev_info->rx_seg_capa.multi_pools = true; + dev_info->rx_seg_capa.offset_allowed = false; + dev_info->rx_seg_capa.offset_align_log2 = 0; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret 
= SXE2_SUCCESS; + + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto 
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + 
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; + +struct sxe2_rx_queue; + +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *rxq); +}; + +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t) pkts; + RTE_ATOMIC(uint64_t) bytes; + RTE_ATOMIC(uint64_t) drop_pkts; + RTE_ATOMIC(uint64_t) drop_bytes; + RTE_ATOMIC(uint64_t) unicast_pkts; + RTE_ATOMIC(uint64_t) multicast_pkts; + RTE_ATOMIC(uint64_t) broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...) PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3
* [PATCH v10 06/10] drivers: support PCI BAR mapping 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 ` (4 preceding siblings ...) 2026-05-06 11:35 ` [PATCH v10 05/10] drivers: add base driver probe skeleton liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (4 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + 
cmd_fd, bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 is used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = 
map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + 
seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct 
rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, 
SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 
sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v10 07/10] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 ` (5 preceding siblings ...) 2026-05-06 11:35 ` [PATCH v10 06/10] drivers: support PCI BAR mapping liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 08/10] net/sxe2: support queue setup and control liujie5 ` (3 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by the userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Fail to dma map, ret=%d", ret); + goto l_end; + 
} + +l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "IOMMU enabled, PA mode not supported"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "no IOMMU, VA mode not supported, please use PA mode."); + ret = SXE2_ERR_IO; + goto 
l_end; + } + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- 
a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v10 08/10] net/sxe2: support queue setup and control 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 ` (6 preceding siblings ...) 2026-05-06 11:35 ` [PATCH v10 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 09/10] drivers: add data path for Rx and Tx liujie5 ` (2 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 98d0b7fc6d..61467a4e31 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -23,6 +23,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 
sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { 
#define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + 
rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if 
(dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth 
= ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + 
dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configure with Keep crc.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc 
*desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + 
PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + 
rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ 
b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2vf tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h 
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v10 09/10] drivers: add data path for Rx and Tx 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 ` (7 preceding siblings ...) 2026-05-06 11:35 ` [PATCH v10 08/10] net/sxe2: support queue setup and control liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-06 11:35 ` [PATCH v10 10/10] net/sxe2: add vectorized " liujie5 2026-05-07 0:23 ` [PATCH v10 00/10] Add Linkdata sxe2 driver Stephen Hemminger 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 13 +- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 11 files changed, 1082 insertions(+), 133 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 537d4e9f6a..d2ed1460a3 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -28,7 +28,7 @@ static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); -static pthread_mutex_t sxe2_common_devices_list_lock; +static rte_spinlock_t sxe2_common_devices_list_lock; static struct rte_pci_id *sxe2_common_pci_id_table; @@ -223,9 +223,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( cdev->config.kernel_reset = false; rte_ticketlock_init(&cdev->config.lock); - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); l_end: return cdev; @@ -233,10 +233,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( static void sxe2_common_device_free(struct sxe2_common_device *cdev) { - - 
(void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); rte_free(cdev); } @@ -662,7 +661,7 @@ sxe2_common_init(void) if (sxe2_commoin_inited) goto l_end; - pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + rte_spinlock_init(&sxe2_common_devices_list_lock); #ifdef SXE2_DPDK_DEBUG sxe2_common_log_stream_init(); #endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) \ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) 
\ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) 
RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ -178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed 
mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? 
(a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 61467a4e31..b331451160 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -25,6 +25,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * 
idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > 
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3
* [PATCH v10 10/10] net/sxe2: add vectorized Rx and Tx 2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5 ` (8 preceding siblings ...) 2026-05-06 11:35 ` [PATCH v10 09/10] drivers: add data path for Rx and Tx liujie5 @ 2026-05-06 11:35 ` liujie5 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 2026-05-07 0:23 ` [PATCH v10 00/10] Add Linkdata sxe2 driver Stephen Hemminger 10 siblings, 2 replies; 143+ messages in thread From: liujie5 @ 2026-05-06 11:35 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch implements the vectorized data path for the sxe2 PMD. It utilizes SIMD instructions (e.g., SSE) to process multiple packets simultaneously, significantly improving throughput for small packet processing. The implementation includes: * Vectorized Rx burst function for bulk descriptor processing. * Vectorized Tx burst function with optimized resource cleanup. * Capability flags update to reflect vectorized path support. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 11 + drivers/net/sxe2/sxe2_ethdev.c | 8 +- drivers/net/sxe2/sxe2_ethdev.h | 1 - drivers/net/sxe2/sxe2_txrx.c | 222 +++++++--- drivers/net/sxe2/sxe2_txrx.h | 12 +- drivers/net/sxe2/sxe2_txrx_poll.c | 186 +++++++- drivers/net/sxe2/sxe2_txrx_poll.h | 3 +- drivers/net/sxe2/sxe2_txrx_vec.c | 192 +++++++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 547 ++++++++++++++++++++++++ 11 files changed, 1422 insertions(+), 67 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index b331451160..0975366c10 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -18,6 +18,16 @@ cflags += ['-g'] deps += ['common_sxe2', 'hash','cryptodev','security'] +includes += include_directories('../../common/sxe2') + +if arch_subdir == 'x86' + sources += files('sxe2_txrx_vec_sse.c') + + if is_windows and cc.get_id() != 'clang' + cflags += ['-fno-asynchronous-unwind-tables'] + endif +endif + sources += files( 'sxe2_ethdev.c', 'sxe2_cmd_chnl.c', @@ -27,6 +37,7 @@ sources += files( 'sxe2_rx.c', 'sxe2_txrx_poll.c', 'sxe2_txrx.c', + 'sxe2_txrx_vec.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 68d7e36cf1..7eaa1722d0 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -58,17 +58,11 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { }; static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { - /* SXE2_PCI_MAP_RES_INVALID */ {0, 0, 0}, - /* SXE2_PCI_MAP_RES_DOORBELL_TX */ { SXE2_TXQ_LEGACY_DBLL(0), 0, 
4}, - /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ { SXE2_RXQ_TAIL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_DYN */ { SXE2_VF_DYN_CTL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ { SXE2_VF_INT_ITR(0, 0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_MSIX */ { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, }; @@ -312,6 +306,8 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .rxq_info_get = sxe2_rx_queue_info_get, .txq_info_get = sxe2_tx_queue_info_get, + .rx_burst_mode_get = sxe2_rx_burst_mode_get, + .tx_burst_mode_get = sxe2_tx_burst_mode_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index 7999e4f331..0881d57d77 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -11,7 +11,6 @@ #include <rte_tm_driver.h> #include <rte_io.h> -#include "sxe2_common.h" #include "sxe2_errno.h" #include "sxe2_type.h" #include "sxe2_vsi.h" diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c index 3e88ab5241..348f420bb1 100644 --- a/drivers/net/sxe2/sxe2_txrx.c +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -9,12 +9,11 @@ #include <rte_memzone.h> #include <ethdev_driver.h> #include <unistd.h> - #include "sxe2_txrx.h" #include "sxe2_txrx_common.h" +#include "sxe2_txrx_vec.h" #include "sxe2_txrx_poll.h" #include "sxe2_ethdev.h" - #include "sxe2_common_log.h" #include "sxe2_errno.h" #include "sxe2_osal.h" @@ -22,18 +21,38 @@ #if defined(RTE_ARCH_ARM64) #include <rte_cpuflags.h> #endif - +s32 __rte_cold +sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) || + txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) { + ret = 
SXE2_ERR_NOTSUP; + goto l_end; + } + } + *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH; +l_end: + return ret; +} static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) { struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; s32 ret; u16 desc_idx; - if (unlikely(offset >= txq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - desc_idx = txq->next_use + offset; desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); if (desc_idx >= txq->ring_depth) { @@ -41,19 +60,16 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) if (desc_idx >= txq->ring_depth) desc_idx -= txq->ring_depth; } - if (desc_idx == 0) desc_idx = txq->rs_thresh - 1; else desc_idx -= 1; - if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == (txq->desc_ring[desc_idx].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) ret = RTE_ETH_TX_DESC_DONE; else ret = RTE_ETH_TX_DESC_FULL; - l_end: return ret; } @@ -61,13 +77,11 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) { struct rte_mbuf *m_seg = mbuf; - while (m_seg != NULL) { if (m_seg->data_len == 0) return SXE2_ERR_INVAL; m_seg = m_seg->next; } - return SXE2_SUCCESS; } @@ -79,7 +93,6 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, u64 ol_flags = 0; s32 ret = SXE2_SUCCESS; s32 i = 0; - for (i = 0; i < nb_pkts; i++) { mbuf = tx_pkts[i]; if (!mbuf) @@ -98,12 +111,10 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -SXE2_ERR_INVAL; goto l_end; } - if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { rte_errno = -SXE2_ERR_INVAL; goto l_end; } - #ifdef RTE_ETHDEV_DEBUG_TX ret = rte_validate_tx_offload(mbuf); if (ret != SXE2_SUCCESS) { @@ -116,14 +127,12 @@ u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, rte_errno = -ret; goto l_end; } - ret = sxe2_tx_mbuf_empty_check(mbuf); if (ret != SXE2_SUCCESS) { rte_errno = -ret; goto l_end; } } - l_end: return i; } @@ -132,42 +141,119 @@ void 
sxe2_tx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 tx_mode_flags = 0; - + s32 ret; + u32 vec_flags; + u32 batch_flags; + RTE_SET_USED(vec_flags); PMD_INIT_FUNC_TRACE(); - - dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; - dev->tx_pkt_burst = sxe2_tx_pkts; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_tx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) { +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) { +#ifdef CC_AVX512_SUPPORT + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX512); +#else + PMD_LOG_INFO(TX, "AVX512 is not supported in build env."); +#endif + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK)) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_AVX2); + } + if ((0 == (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK))) + tx_mode_flags |= (vec_flags | SXE2_TX_MODE_VEC_SSE); +#endif + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + ret = sxe2_tx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK); + } + } + ret = sxe2_tx_simple_batch_support_check(dev, &batch_flags); + if (ret == SXE2_SUCCESS && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH) + tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH; + } + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + dev->tx_pkt_prepare = NULL; +#ifdef RTE_ARCH_X86 + if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse; + } else { + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple; + } +#endif + } else { + if 
(tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) { + dev->tx_pkt_prepare = NULL; + dev->tx_pkt_burst = sxe2_tx_pkts_simple; + } else { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + } + } adapter->q_ctxt.tx_mode_flags = tx_mode_flags; PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", tx_mode_flags, dev->data->port_id); } +static const struct { + eth_tx_burst_t tx_burst; + const char *info; +} sxe2_tx_burst_infos[] = { + { sxe2_tx_pkts, "Scalar" }, +#ifdef RTE_ARCH_X86 + { sxe2_tx_pkts_vec_sse, "Vector SSE" }, + { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" }, +#endif +}; + +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode) +{ + eth_tx_burst_t pkt_burst = dev->tx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i; + u32 size; + size = RTE_DIM(sxe2_tx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_tx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) { struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; s32 ret; - if (unlikely(offset >= rxq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - if (offset >= rxq->ring_depth - rxq->hold_num) { ret = RTE_ETH_RX_DESC_UNAVAIL; goto l_end; } - if (rxq->processing_idx + offset >= rxq->ring_depth) desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; else desc = &rxq->desc_ring[rxq->processing_idx + offset]; - if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) ret = RTE_ETH_RX_DESC_DONE; else ret = RTE_ETH_RX_DESC_AVAIL; - l_end: PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", offset, ret, rxq->queue_id, rxq->port_id); @@ -179,7 +265,6 @@ static s32 sxe2_rx_queue_count(void *rx_queue) struct sxe2_rx_queue 
*rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; u16 done_num = 0; - desc = &rxq->desc_ring[rxq->processing_idx]; while ((done_num < rxq->ring_depth) && (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & @@ -190,59 +275,92 @@ static s32 sxe2_rx_queue_count(void *rx_queue) else desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; } - PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", done_num, rxq->queue_id, rxq->port_id); - return done_num; } -static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) -{ - struct sxe2_rx_queue *rxq; - bool en = false; - u16 i; - - for (i = 0; i < dev->data->nb_rx_queues; ++i) { - rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; - if (rxq == NULL) - continue; - - if (0 != (rxq->offloads & offload)) { - en = true; - goto l_end; - } - } - -l_end: - return en; -} - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 rx_mode_flags = 0; + s32 ret; + u32 vec_flags; PMD_INIT_FUNC_TRACE(); - + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_rx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { +#ifdef RTE_ARCH_X86 + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_AVX2); + } + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + rx_mode_flags |= (vec_flags | SXE2_RX_MODE_VEC_SSE); + } +#endif + if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) { + ret = sxe2_rx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK); + } + } + } +#ifdef RTE_ARCH_X86 + if (rx_mode_flags & 
SXE2_RX_MODE_VEC_SET_MASK) { + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload; + goto l_end; + } +#endif if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; else dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - +l_end: PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", rx_mode_flags, dev->data->port_id); adapter->q_ctxt.rx_mode_flags = rx_mode_flags; } +static const struct { + eth_rx_burst_t rx_burst; + const char *info; +} sxe2_rx_burst_infos[] = { + { sxe2_rx_pkts_scattered, "Scalar Scattered" }, + { sxe2_rx_pkts_scattered_split, "Scalar Scattered Split" }, +#ifdef RTE_ARCH_X86 + { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" }, +#endif +}; + +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode) +{ + eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i, size; + size = RTE_DIM(sxe2_rx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_rx_burst_infos[i].rx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_rx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + void sxe2_set_common_function(struct rte_eth_dev *dev) { PMD_INIT_FUNC_TRACE(); - dev->rx_queue_count = sxe2_rx_queue_count; dev->rx_descriptor_status = sxe2_rx_desciptor_status; dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - dev->tx_descriptor_status = sxe2_tx_desciptor_status; dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; dev->tx_pkt_burst = sxe2_tx_pkts; diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h index cd9ebfa32f..7bb852789c 100644 --- a/drivers/net/sxe2/sxe2_txrx.h +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -6,16 +6,16 @@ #define SXE2_TXRX_H #include <ethdev_driver.h> #include "sxe2_queue.h" - void sxe2_set_common_function(struct rte_eth_dev *dev); - +s32 __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags); 
u16 sxe2_tx_pkts_prepare(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); - void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); - +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode); +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode); #endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c index 55bea8b74c..37ce4d8e17 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.c +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -19,6 +19,66 @@ #include "sxe2_common_log.h" #include "sxe2_errno.h" +static __rte_always_inline s32 +sxe2_tx_bufs_free(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1]; + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) { + mbuf = buffer[0].mbuf; + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = buffer[i].mbuf; + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + rte_mempool_put(buffer->mbuf->pool, buffer->mbuf); + 
buffer->mbuf = NULL; + } + } + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + mbuf = rte_pktmbuf_prefree_seg(buffer->mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + buffer->mbuf = NULL; + } + } + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) { s32 ret = SXE2_SUCCESS; @@ -330,6 +390,130 @@ u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) return tx_num; } +static __rte_always_inline void +sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 desc_offset; + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, (*tx_pkts)->data_len, 0); +} +static __rte_always_inline void +sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 i; + u32 desc_offset; + for (i = 0; i < SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) { + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, + (*tx_pkts)->data_len, + 0); + } +} + +static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use]; + volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use]; + u32 i, j; + u32 mainpart; + u32 leftover; + mainpart = nb_pkts & 
((u32)~SXE2_TX_FILL_PER_LOOP_MASK); + leftover = nb_pkts & ((u32)SXE2_TX_FILL_PER_LOOP_MASK); + for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) { + for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j) + (buffer + i + j)->mbuf = *(tx_pkts + i + j); + sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i); + } + if (unlikely(leftover > 0)) { + for (i = 0; i < leftover; ++i) { + (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i); + sxe2_tx_data_desc_fill(desc + mainpart + i, + tx_pkts + mainpart + i); + } + } +} + +static inline u16 sxe2_tx_pkts_batch(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + u16 res_num = 0; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx batch: not enough free descriptors, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + txq->desc_free_num -= nb_pkts; + if ((txq->next_use + nb_pkts) > txq->ring_depth) { + res_num = txq->ring_depth - txq->next_use; + sxe2_tx_ring_fill(txq, tx_pkts, res_num); + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs = txq->rs_thresh - 1; + txq->next_use = 0; + } + sxe2_tx_ring_fill(txq, tx_pkts + res_num, nb_pkts - res_num); + txq->next_use = txq->next_use + (nb_pkts - res_num); + if (txq->next_use > txq->next_rs) { + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + if (txq->next_rs >= txq->ring_depth) + txq->next_rs = txq->rs_thresh - 1; + } + if (txq->next_use >= txq->ring_depth) + txq->next_use = 0; + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, txq->next_use, nb_pkts); + 
SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use); + SXE2_TX_STATS_CNT(tx_queue, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 tx_done_num; + u16 tx_once_num; + u16 tx_need_num; + if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) { + tx_done_num = sxe2_tx_pkts_batch(tx_queue, + tx_pkts, nb_pkts); + goto l_end; + } + tx_done_num = 0; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM); + tx_once_num = sxe2_tx_pkts_batch(tx_queue, + &tx_pkts[tx_done_num], tx_need_num); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } +l_end: + return tx_done_num; +} + static inline void sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) { @@ -585,7 +769,7 @@ u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 struct rte_mbuf *cur_mbuf; struct rte_mbuf *cur_mbuf_pay; struct rte_mbuf *new_mbuf; - struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *new_mbuf_pay = NULL; struct rte_mbuf *first_seg; struct rte_mbuf *last_seg; u64 qword1; diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h index 4924b0f41f..67da08e58e 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.h +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -8,7 +8,8 @@ #include "sxe2_queue.h" u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); diff --git a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c new file mode 100644 index 0000000000..32823f3740 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.c @@ -0,0 +1,192 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 
(C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_rx_queue *rxq; + s32 ret = SXE2_SUCCESS; + u16 i; + *vec_flags = SXE2_RX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (!rte_is_power_of_2(rxq->ring_depth)) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC && + (rxq->ring_depth % rxq->rx_free_thresh) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + if ((rxq->offloads & offload) != 0) { + en = true; + goto l_end; + } + } +l_end: + return en; +} + +static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq) +{ + const u16 mask = rxq->ring_depth - 1; + u16 i; + if (unlikely(!rxq->buffer_ring)) { + PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring is NULL." 
+ "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id); + return; + } + if (rxq->realloc_num >= rxq->ring_depth) + return; + if (rxq->realloc_num == 0) { + for (i = 0; i < rxq->ring_depth; ++i) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } else { + for (i = rxq->processing_idx; + i != rxq->realloc_start; + i = (i + 1) & mask) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + rxq->realloc_num = rxq->ring_depth; + memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0])); +} + +static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq) +{ + uintptr_t data; + struct rte_mbuf mbuf_def; + mbuf_def.buf_addr = 0; + mbuf_def.nb_segs = 1; + mbuf_def.data_off = RTE_PKTMBUF_HEADROOM; + mbuf_def.port = rxq->port_id; + rte_mbuf_refcnt_set(&mbuf_def, 1); + rte_compiler_barrier(); + data = (uintptr_t)&mbuf_def.rearm_data; + rxq->mbuf_init_value = *(u64 *)data; +} + +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_rx_queue *rxq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i); + continue; + } + rxq->ops.mbufs_release = sxe2_rx_queue_mbufs_release_vec; + sxe2_rx_queue_vec_init(rxq); + } + return ret; +} + +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u32 i; + *vec_flags = SXE2_TX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC || + txq->rs_thresh > SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) { + ret = SXE2_ERR_NOTSUP; + 
goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + u16 i; + if (unlikely(txq == NULL || txq->buffer_ring == NULL)) { + PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params."); + goto l_end; + } + i = txq->next_dd - (txq->rs_thresh - 1); + buffer = txq->buffer_ring; + if (txq->next_use < i) { + for ( ; i < txq->ring_depth; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + i = 0; + } + for (; i < txq->next_use; ++i) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } +l_end: + return; +} + +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_tx_queue *txq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) { + PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i); + continue; + } + txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec; + } + return ret; +} diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h new file mode 100644 index 0000000000..cb6a3dd3b8 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_VEC_H_ +#define _SXE2_TXRX_VEC_H_ +#include <ethdev_driver.h> +#include "sxe2_queue.h" +#include "sxe2_type.h" +#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10) +#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \ + SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \ + SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \ + SXE2_RX_MODE_VEC_NEON) +#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10) +#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \ + SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \ + SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \ + SXE2_TX_MODE_VEC_NEON) +#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \ + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \ + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_TSO | \ + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_SECURITY | \ + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) +#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_CKSUM) +#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP | \ + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \ + RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_SECURITY | \ + 
RTE_ETH_RX_OFFLOAD_QINQ_STRIP) +#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH) +#ifdef RTE_ARCH_X86 +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts); +#endif +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload); +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev); +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h new file mode 100644 index 0000000000..c0405c9a59 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h @@ -0,0 +1,235 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_TXRX_VEC_COMMON_H__ +#define __SXE2_TXRX_VEC_COMMON_H__ +#include <rte_atomic.h> +#ifdef PCLINT +#include "avx_stub.h" +#endif +#include "sxe2_rx.h" +#include "sxe2_queue.h" +#include "sxe2_tx.h" +#include "sxe2_vsi.h" +#include "sxe2_ethdev.h" +#define SXE2_RX_NUM_PER_LOOP_SSE 4 +#define SXE2_RX_NUM_PER_LOOP_AVX 8 +#define SXE2_RX_NUM_PER_LOOP_NEON 4 +#define SXE2_RX_REARM_THRESH_VEC 64 +#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32 +#define SXE2_TX_RS_THRESH_MIN_VEC 32 +#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64 + +static __rte_always_inline void +sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 i; + for (i = 0; i < nb_pkts; ++i) + buffer[i].mbuf = tx_pkts[i]; +} + +static __rte_always_inline s32 +sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)]; + mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf); + if (likely(mbuf)) { + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (likely(mbuf)) { + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + } + 
} + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + +static inline void +sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, u64 *desc_qw1) +{ + u64 offloads = mbuf->ol_flags; + u32 desc_cmd = 0; + u32 desc_offset = 0; + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + default: + break; + } + *desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + *desc_qw1 |= ((u64)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT; + } + *desc_qw1 |= ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT; +} +#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \ + (((_flags) & 0x30) >> 4) + +static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq, + struct rte_mbuf *mbuf, u8 umbcast_flag) +{ + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed); + switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } + } +} + +static inline u16 +sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq, + struct rte_mbuf **mbuf_bufs, u16 mbuf_num, + u8 *split_rxe_flags, u8 *umbcast_flags) +{ + struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *tmp_seg; + u16 done_num, buf_idx; + done_num = 0; + for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) { + if (last_seg) { + last_seg->next = mbuf_bufs[buf_idx]; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + first_seg->nb_segs++; + first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len; + last_seg = last_seg->next; + if (split_rxe_flags[buf_idx] == 0) { + first_seg->hash = last_seg->hash; + first_seg->vlan_tci = last_seg->vlan_tci; + first_seg->ol_flags = last_seg->ol_flags; + first_seg->pkt_len -= rxq->crc_len; + if (last_seg->data_len > rxq->crc_len) { + last_seg->data_len -= rxq->crc_len; + } else { + tmp_seg = first_seg; + first_seg->nb_segs--; + while (tmp_seg->next != last_seg) + tmp_seg = tmp_seg->next; + tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len); + tmp_seg->next = NULL; + rte_pktmbuf_free_seg(last_seg); + last_seg = NULL; + } + done_pkts[done_num++] = first_seg; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]); + first_seg = NULL; + last_seg = NULL; + } else if 
(split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + continue; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + last_seg = NULL; + continue; + } + } else { + if (split_rxe_flags[buf_idx] == 0) { + done_pkts[done_num++] = mbuf_bufs[buf_idx]; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx], + umbcast_flags[buf_idx]); + continue; + } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + first_seg = mbuf_bufs[buf_idx]; + last_seg = first_seg; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]); + continue; + } + } + } + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *))); + return done_num; +} +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c new file mode 100644 index 0000000000..9bc291577b --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c @@ -0,0 +1,547 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_bitops.h> +#include <rte_malloc.h> +#include <rte_mempool.h> +#include <rte_vect.h> +#include "rte_common.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_queue.h" +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_vsi.h" + +static __rte_always_inline void +sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf *pkt, + u64 desc_cmd, bool with_offloads) +{ + __m128i data_desc; + u64 desc_qw1; + u32 desc_offset; + desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA | + ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT | + ((u64)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len); + desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (with_offloads) + sxe2_tx_desc_fill_offloads(pkt, &desc_qw1); + data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc); +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + volatile union sxe2_tx_data_desc *desc; + struct sxe2_tx_buffer *buffer; + u16 next_use; + u16 res_num; + u16 tx_num; + u16 i; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free_vec(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_TX_DEBUG("Tx pkts sse batch: may not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + tx_num = nb_pkts; + next_use = txq->next_use; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + txq->desc_free_num -= nb_pkts; + res_num = txq->ring_depth - txq->next_use; + if (tx_num >= res_num) { + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num); + for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, 
+ SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++, + (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS), + with_offloads); + tx_num -= res_num; + next_use = 0; + txq->next_rs = txq->rs_thresh - 1; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + } + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num); + for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + next_use += tx_num; + if (next_use > txq->next_rs) { + txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + } + txq->next_use = next_use; + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, nb_pkts); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, nb_pkts); +l_end: + return nb_pkts; +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + u16 tx_done_num = 0; + u16 tx_once_num; + u16 tx_need_num; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh); + tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq, + tx_pkts + tx_done_num, + tx_need_num, with_offloads); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } + return tx_done_num; +} + +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, false); +} +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, true); +} + +static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq) +{ + volatile 
union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + struct rte_mbuf *mbuf0, *mbuf1; + __m128i dma_addr0, dma_addr1; + __m128i virt_addr0, virt_addr1; + __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, + RTE_PKTMBUF_HEADROOM); + s32 ret; + u16 i; + u16 new_tail; + buffer = &rxq->buffer_ring[rxq->realloc_start]; + desc = &rxq->desc_ring[rxq->realloc_start]; + ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer, + SXE2_RX_REARM_THRESH_VEC); + if (ret != 0) { + PMD_LOG_RX_INFO("Rx mbuf vec alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, rxq->queue_id); + if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) { + dma_addr0 = _mm_setzero_si128(); + for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) { + buffer[i] = &rxq->fake_mbuf; + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read), + dma_addr0); + } + } + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed += + SXE2_RX_REARM_THRESH_VEC; + goto l_end; + } + for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) { + mbuf0 = buffer[0]; + mbuf1 = buffer[1]; +#if RTE_IOVA_IN_MBUF + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) != + offsetof(struct rte_mbuf, buf_addr) + 8); +#endif + virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr); + virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr); +#if RTE_IOVA_IN_MBUF + dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1); +#else + dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1); +#endif + dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room); + dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr0); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr1); + } + rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC; + if (rxq->realloc_start >= rxq->ring_depth) + rxq->realloc_start = 0; + rxq->realloc_num -= 
SXE2_RX_REARM_THRESH_VEC; + new_tail = (rxq->realloc_start == 0) ? + (rxq->ring_depth - 1) : (rxq->realloc_start - 1); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail); +l_end: + return; +} + +static __rte_always_inline __m128i +sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4]) +{ + __m128i descs_tmp1, descs_tmp2; + __m128i descs_fnav_vld; + __m128i v_zeros, v_ffff, v_u32_one; + __m128i m_flags; + const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID); + descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]); + descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]); + descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2); + descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26); + descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31); + v_zeros = _mm_setzero_si128(); + v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros); + v_u32_one = _mm_srli_epi32(v_ffff, 31); + m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one); + m_flags = _mm_and_si128(m_flags, fdir_flags); + return m_flags; +} + +static __rte_always_inline void +sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq, + volatile union sxe2_rx_desc *desc __rte_unused, + __m128i descs_arr[4], + struct rte_mbuf **rx_pkts) +{ + const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value); + __m128i rearm_arr[4]; + __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags; + const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04, + 0x00001C04, 0x00001C04); + const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000, + 0x20000000, 0x20000000); + const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, + 0, 0, 0, RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + 0, 0, 0, 0); + const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH, + 0, 0, 0, 0); + const __m128i cksum_flags = + _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + 
RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1)); + const __m128i cksum_mask = + _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD); + const __m128i vlan_mask = + _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED); + flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]); + tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]); + tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags); + tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags); + tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask); + tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask); + tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo); + flags = _mm_and_si128(tmp_flags, vlan_mask); + tmp_desc_lo = 
_mm_srli_epi32(tmp_desc_lo, 10); + tmp_flags = _mm_shuffle_epi8(cksum_flags, tmp_desc_lo); + tmp_flags = _mm_slli_epi32(tmp_flags, 1); + tmp_flags = _mm_and_si128(tmp_flags, cksum_mask); + flags = _mm_or_si128(flags, tmp_flags); + tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27); + tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi); + flags = _mm_or_si128(flags, tmp_flags); +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + if (rxq->fnav_enable) { + __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr); + flags = _mm_or_si128(flags, tmp_fnav_flags); + rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id; + rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id; + rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id; + rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id; + } +#endif + rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30); + rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30); + rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30); + rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) != + offsetof(struct rte_mbuf, rearm_data) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) != + RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]); +} + +static inline u16 +sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts, u8 *split_rxe_flags, + u8 *umbcast_flags) +{ + volatile union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i staterr, sterr_tmp1, sterr_tmp2; + 
__m128i pmbuf0; + __m128i ptype_all; +#ifdef RTE_ARCH_X86_64 + __m128i pmbuf1; +#endif + u32 i; + u32 bit_num; + u16 done_num = 0; + const u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + const __m128i crc_adjust = + _mm_set_epi16(0, 0, 0, + -rxq->crc_len, + 0, -rxq->crc_len, + 0, 0); + const __m128i rvp_shuf_mask = + _mm_set_epi8(7, 6, 5, 4, + 3, 2, + 13, 12, + 0XFF, 0xFF, 13, 12, + 0xFF, 0xFF, 0xFF, 0xFF); + const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL, + 0x0000000100000001LL); + const __m128i eop_mask = _mm_slli_epi32(dd_mask, + SXE2_RX_DESC_STATUS_EOP_SHIFT); + const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL, + 0x0000208000002080LL); + const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x04, 0x0C, + 0x00, 0x08); + const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12); + desc = &rxq->desc_ring[rxq->processing_idx]; + rte_prefetch0(desc); + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE); + if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC) + sxe2_rx_queue_rearm_sse(rxq); + if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK) == 0) + goto l_end; + buffer = &rxq->buffer_ring[rxq->processing_idx]; + for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE, + desc += SXE2_RX_NUM_PER_LOOP_SSE) { + pmbuf0 = 
_mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i])); + descs_arr[3] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 3)); + rte_compiler_barrier(); + _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0); +#ifdef RTE_ARCH_X86_64 + pmbuf1 = _mm_loadu_si128((__m128i *)&buffer[i + 2]); +#endif + descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 2)); + rte_compiler_barrier(); + descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 1)); + rte_compiler_barrier(); + descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc)); +#ifdef RTE_ARCH_X86_64 + _mm_storeu_si128((__m128i *)&rx_pkts[i + 2], pmbuf1); +#endif + if (split_rxe_flags) { + rte_mbuf_prefetch_part2(rx_pkts[i]); + rte_mbuf_prefetch_part2(rx_pkts[i + 1]); + rte_mbuf_prefetch_part2(rx_pkts[i + 2]); + rte_mbuf_prefetch_part2(rx_pkts[i + 3]); + } + rte_compiler_barrier(); + mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask); + mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask); + mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask); + mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask); + sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]); + sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]); + sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts); + mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust); + mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust); + mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust); + mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust); + staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2); + ptype_all = _mm_and_si128(staterr, ptype_mask); + _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1, + mbuf_arr[3]); + _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1, + mbuf_arr[2]); + if (umbcast_flags != NULL) { + const __m128i umbcast_mask = + _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + 
SXE2_RX_DESC_STATUS_UMBCAST_MASK); + const __m128i umbcast_shuf_mask = + _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x07, 0x0F, + 0x03, 0x0B); + __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask); + umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask); + *(s32 *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits); + umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + if (split_rxe_flags != NULL) { + __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask); + __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask); + rxe_bits = _mm_srli_epi32(rxe_bits, 7); + eop_bits = _mm_or_si128(eop_bits, rxe_bits); + eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask); + *(s32 *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits); + split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + staterr = _mm_and_si128(staterr, dd_mask); + staterr = _mm_packs_epi32(staterr, _mm_setzero_si128()); + _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1, + mbuf_arr[1]); + _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1, + mbuf_arr[0]); + rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)]; + rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)]; + rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)]; + rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)]; + bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr)); + done_num += bit_num; + if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE)) + break; + } + rxq->processing_idx += done_num; + rxq->processing_idx &= (rxq->ring_depth - 1); + rxq->realloc_num += done_num; + PMD_LOG_RX_DEBUG("port_id=%u queue_id=%u last_id=%u recv_pkts=%d", + rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num); +l_end: + return done_num; +} +static __rte_always_inline u16 +sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + const u64 *split_rxe_flags64; + u8 
split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u8 umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u16 rx_done_num; + u16 rx_pkt_done_num; + rx_pkt_done_num = 0; + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, umbcast_flags); + } else { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, NULL); + } + if (rx_done_num == 0) + goto l_end; + if (!rxq->vsi->adapter->devargs.sw_stats_en) { + split_rxe_flags64 = (u64 *)split_rxe_flags; + if (rxq->pkt_first_seg == NULL && + split_rxe_flags64[0] == 0 && + split_rxe_flags64[1] == 0 && + split_rxe_flags64[2] == 0 && + split_rxe_flags64[3] == 0) { + rx_pkt_done_num = rx_done_num; + goto l_end; + } + if (rxq->pkt_first_seg == NULL) { + while (rx_pkt_done_num < rx_done_num && + split_rxe_flags[rx_pkt_done_num] == 0) + rx_pkt_done_num++; + if (rx_pkt_done_num == rx_done_num) + goto l_end; + rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num]; + } + } + rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num], + rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num], + &umbcast_flags[rx_pkt_done_num]); +l_end: + return rx_pkt_done_num; +} + +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + u16 done_num = 0; + u16 once_num; + while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) { + once_num = + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, + SXE2_RX_PKTS_BURST_BATCH_NUM_VEC); + done_num += once_num; + nb_pkts -= once_num; + if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) + goto l_end; + } + done_num += + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, nb_pkts); +l_end: + SXE2_RX_STATS_CNT(rx_queue, rx_pkts_num, done_num); + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v11 0/9] Add Linkdata sxe2 driver 2026-05-06 11:35 ` [PATCH v10 10/10] net/sxe2: add vectorized " liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 1/9] mailmap: add Jie Liu liujie5 ` (9 more replies) 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 1 sibling, 10 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> V11: - Addressed AI comments Jie Liu (9): mailmap: add Jie Liu doc: add sxe2 guide and release notes drivers: add sxe2 basic structures common/sxe2: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control drivers: add data path for Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 11 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 + doc/guides/rel_notes/release_26_07.rst | 4 + drivers/common/sxe2/meson.build | 21 + drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.c | 75 ++ drivers/common/sxe2/sxe2_common_log.h | 263 ++++++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++ drivers/common/sxe2/sxe2_type.h | 64 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 32 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 975 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 316 +++++++ 
drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 + drivers/net/sxe2/sxe2_queue.h | 227 +++++ drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 249 ++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 782 +++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 40 files changed, 8701 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 
drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
* [PATCH v11 1/9] mailmap: add Jie Liu 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 2/9] doc: add sxe2 guide and release notes liujie5 ` (8 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- .mailmap | 1 + 1 file changed, 1 insertion(+) diff --git a/.mailmap b/.mailmap index 895412e568..d2c4485636 100644 --- a/.mailmap +++ b/.mailmap @@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com> Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com> Jie Hai <haijie1@huawei.com> Jie Liu <jie2.liu@hxt-semitech.com> +Jie Liu <liujie5@linkdatatechnology.com> Jie Pan <panjie5@jd.com> Jie Wang <jie1x.wang@intel.com> Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com> -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v11 2/9] doc: add sxe2 guide and release notes 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 2026-05-07 1:44 ` [PATCH v11 1/9] mailmap: add Jie Liu liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 3/9] drivers: add sxe2 basic structures liujie5 ` (7 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add a new guide for the SXE2 PMD in the nics directory. The guide contains driver capabilities, prerequisites, and compilation/usage instructions. Update the release notes to announce the addition of the sxe2 network driver. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- doc/guides/nics/features/sxe2.ini | 11 +++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 23 +++++++++++++++++++++++ doc/guides/rel_notes/release_26_07.rst | 4 ++++ 4 files changed, 39 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini new file mode 100644 index 0000000000..cbf5a773fb --- /dev/null +++ b/doc/guides/nics/features/sxe2.ini @@ -0,0 +1,11 @@ +; +; Supported features of the 'sxe2' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +; A feature with "P" indicates it is only supported when the non-vector path +; is selected. +; +[Features] +Queue start/stop = Y +Linux = Y \ No newline at end of file diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index cb818284fe..e20be478f8 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -68,6 +68,7 @@ Network Interface Controller Drivers rnp sfc_efx softnic + sxe2 tap thunderx txgbe diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst new file mode 100644 index 0000000000..2f9ba91c33 --- /dev/null +++ b/doc/guides/nics/sxe2.rst @@ -0,0 +1,23 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +SXE2 Poll Mode Driver +====================== + +The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for +10/25/50/100/200 Gbps network adapters. +The embedded switch, Physical Functions (PF), +and SR-IOV Virtual Functions (VF) are supported. + +Implementation details +---------------------- + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resource allocations are handled by the kernel, +combined with hardware specifications that allow it to handle virtual memory +addresses directly, ensures that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +This capability allows the PMD to coexist with kernel network interfaces, +which remain functional, although they stop receiving unicast packets as +long as they share the same MAC address. diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index f012d47a4b..fa0f0f5cca 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -64,6 +64,10 @@ New Features * ``--auto-probing`` enables the initial bus probing, which is the current default behavior. +* **Added Linkdata sxe2 ethernet driver.** + + Added a network driver for Linkdata network adapters. 
+ Removed Items ------------- -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v11 3/9] drivers: add sxe2 basic structures 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 2026-05-07 1:44 ` [PATCH v11 1/9] mailmap: add Jie Liu liujie5 2026-05-07 1:44 ` [PATCH v11 2/9] doc: add sxe2 guide and release notes liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 4/9] common/sxe2: add base driver skeleton liujie5 ` (6 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 19 + drivers/common/sxe2/sxe2_common_log.c | 75 +++ drivers/common/sxe2/sxe2_common_log.h | 368 ++++++++++++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 65 +++ drivers/meson.build | 1 + 9 files changed, 1965 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common_log.c create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git 
a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build
new file mode 100644
index 0000000000..09ce556f70
--- /dev/null
+++ b/drivers/common/sxe2/meson.build
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+if is_windows
+    build = false
+    reason = 'only supported on Linux'
+    subdir_done()
+endif
+
+cflags += [
+    '-DSXE2_DPDK_DRIVER',
+    '-DSXE2_DPDK_DEBUG',
+]
+
+deps += ['bus_pci', 'net', 'eal', 'ethdev']
+
+sources = files(
+    'sxe2_common_log.c',
+)
diff --git a/drivers/common/sxe2/sxe2_common_log.c b/drivers/common/sxe2/sxe2_common_log.c
new file mode 100644
index 0000000000..e2963ce762
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common_log.c
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <eal_export.h>
+#include <string.h>
+#include <time.h>
+#include <rte_log.h>
+
+#include "sxe2_common_log.h"
+
+#ifdef SXE2_DPDK_DEBUG
+#define SXE2_COMMON_LOG_FILE_NAME_LEN 256
+#define SXE2_COMMON_LOG_FILE_PATH "/var/log/"
+
+FILE *g_sxe2_common_log_fp;
+s8 g_sxe2_common_log_filename[SXE2_COMMON_LOG_FILE_NAME_LEN] = {0};
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_init)
+void
+sxe2_common_log_stream_init(void)
+{
+	FILE *fp;
+	struct tm *td;
+	time_t rawtime;
+	u8 len;
+	s8 stime[40];
+
+	if (g_sxe2_common_log_fp)
+		goto l_end;
+
+	memset(g_sxe2_common_log_filename, 0, SXE2_COMMON_LOG_FILE_NAME_LEN);
+
+	len = snprintf(g_sxe2_common_log_filename, SXE2_COMMON_LOG_FILE_NAME_LEN,
+			"%ssxe2pmd.log.", SXE2_COMMON_LOG_FILE_PATH);
+
+	time(&rawtime);
+	td = localtime(&rawtime);
+	strftime(stime, sizeof(stime), "%Y-%m-%d-%H:%M:%S", td);
+
+	snprintf(g_sxe2_common_log_filename + len, SXE2_COMMON_LOG_FILE_NAME_LEN - len,
+			"%s", stime);
+
+	fp = fopen(g_sxe2_common_log_filename, "w+");
+	if (fp == NULL) {
+		RTE_LOG_LINE_PREFIX(ERR, SXE2_COM, "Fail to open log file:%s, errno:%d %s.",
+				g_sxe2_common_log_filename RTE_LOG_COMMA errno RTE_LOG_COMMA
+				strerror(errno));
+		goto l_end;
+	}
+	g_sxe2_common_log_fp = fp;
+
+l_end:
+	return;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_open)
+void
+sxe2_common_log_stream_open(void)
+{
+	rte_openlog_stream(g_sxe2_common_log_fp);
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_log_stream_close)
+void
+sxe2_common_log_stream_close(void)
+{
+	rte_openlog_stream(NULL);
+}
+#endif
+
+#ifdef SXE2_DPDK_DEBUG
+RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, DEBUG);
+#else
+RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE);
+#endif
diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h
new file mode 100644
index 0000000000..8ade49d020
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common_log.h
@@ -0,0 +1,368 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_COMMON_LOG_H__
+#define __SXE2_COMMON_LOG_H__
+
+#ifndef RTE_EXEC_ENV_WINDOWS
+#include <pthread.h>
+#else
+#include <windows.h>
+#endif
+
+#include "sxe2_type.h"
+
+extern s32 sxe2_common_log;
+extern s32 sxe2_log_init;
+extern s32 sxe2_log_driver;
+extern s32 sxe2_log_rx;
+extern s32 sxe2_log_tx;
+extern s32 sxe2_log_hw;
+
+#define RTE_LOGTYPE_SXE2_COM sxe2_common_log
+#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init
+#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver
+#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx
+#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx
+#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw
+
+#define STIME(log_time) \
+	do { \
+		time_t tv; \
+		struct tm *td; \
+		time(&tv); \
+		td = localtime(&tv); \
+		strftime(log_time, sizeof(log_time), "%Y-%m-%d-%H:%M:%S", td); \
+	} while (0)
+
+#define filename_printf(x) (strrchr((x), '/') ? strrchr((x), '/') + 1 : (x))
+
+#ifndef RTE_EXEC_ENV_WINDOWS
+#define get_current_thread_id() ((uint64_t)pthread_self())
+#else
+#define get_current_thread_id() ((uint64_t)GetCurrentThreadId())
+#endif
+
+#ifdef SXE2_DPDK_DEBUG
+
+__rte_internal
+void
+sxe2_common_log_stream_open(void);
+
+__rte_internal
+void
+sxe2_common_log_stream_close(void);
+
+__rte_internal
+void
+sxe2_common_log_stream_init(void);
+
+#define SXE2_PMD_LOG(level, log_type, ...) \
+	RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s(): ", \
+		get_current_thread_id() RTE_LOG_COMMA \
+		filename_printf(__FILE__) RTE_LOG_COMMA \
+		__LINE__ RTE_LOG_COMMA \
+		__func__, __VA_ARGS__)
+
+#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \
+	RTE_LOG_LINE_PREFIX(level, log_type, "[%" PRIu64 "]:%s:%u:%s():[port:%u]:", \
+		get_current_thread_id() RTE_LOG_COMMA \
+		filename_printf(__FILE__) RTE_LOG_COMMA \
+		__LINE__ RTE_LOG_COMMA \
+		__func__ RTE_LOG_COMMA \
+		adapter->port_id, __VA_ARGS__)
+
+#define PMD_LOG_DEBUG(logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_INFO(logtype, fmt, ...) \
+	do { \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_NOTICE(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_WARN(logtype, fmt, ...) \
+	do { \
+		SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_open(); \
+		SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__); \
+		sxe2_common_log_stream_close(); \
+	} while (0)
+
+#define PMD_LOG_ERR(logtype, fmt, ...)
\ + do { \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + do { \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + do { \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) 
\ + do { \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + do { \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_open();\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__); \ + sxe2_common_log_stream_close();\ + } while (0) + +#else +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) \ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) 
\ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#endif + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#ifdef SXE2_DPDK_DEBUG + +#define LOG_DEBUG(fmt, ...) \ + PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) + +#define LOG_INFO(fmt, ...) \ + PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) + +#define LOG_WARN(fmt, ...) \ + PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) + +#define LOG_ERROR(fmt, ...) 
\ + PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) + +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) + +#else +#define LOG_DEBUG(fmt, ...) +#define LOG_INFO(fmt, ...) +#define LOG_WARN(fmt, ...) +#define LOG_ERROR(fmt, ...) +#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ + PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) + +#define LOG_INFO_BDF(dev_name, fmt, ...) \ + PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) + +#define LOG_WARN_BDF(dev_name, fmt, ...) \ + PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) + +#define LOG_ERROR_BDF(dev_name, fmt, ...) \ + PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) +#endif + +#ifdef SXE2_DPDK_DEBUG +#define LOG_DEV_DEBUG(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_INFO(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_WARN(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_DEV_ERR(fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_INFO(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_WARN(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#define LOG_MSG_ERR(msglvl, fmt, ...) \ + do { \ + RTE_SET_USED(adapter); \ + LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ + } while (0) + +#else + +#define LOG_DEV_DEBUG(fmt, ...) 
RTE_SET_USED(adapter) +#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) +#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) +#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) +#endif + +#endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = 
-ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + + SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, + + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMIEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 
2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define 
SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 
+#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) +#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF 
<< SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + +#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define 
SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M 
SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define 
SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT 
(SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M 
SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 +#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define 
SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H 
(PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET 
+ ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + (CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + 
port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..a41913fdd8 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_INTERNAL_VER_H__ +#define __SXE2_INTERNAL_VER_H__ + +#define SXE2_VER_MAJOR_OFFSET (16) +#define SXE2_MK_VER(major, minor) \ + (major << SXE2_VER_MAJOR_OFFSET | minor) +#define SXE2_MK_VER_MAJOR(ver) ((ver >> SXE2_VER_MAJOR_OFFSET) & 0xff) +#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff) + +#define SXE2_ITR_VER_MAJOR_V100 1 +#define SXE2_ITR_VER_MAJOR_V200 2 + +#define SXE2_ITR_VER_MAJOR 1 +#define SXE2_ITR_VER_MINOR 1 +#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR) + +#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100) +#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200) + +#define SXE2LIB_ITR_VER_MAJOR 1 +#define SXE2LIB_ITR_VER_MINOR 1 +#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR) + +#define SXE2_DRV_CLI_VER_MAJOR 1 +#define SXE2_DRV_CLI_VER_MINOR 1 +#define SXE2_DRV_CLI_VER \ + SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR) + +#endif diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h new file mode 100644 index 0000000000..fd6823fe98 --- /dev/null +++ b/drivers/common/sxe2/sxe2_osal.h @@ -0,0 +1,584 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_OSAL_H__ +#define __SXE2_OSAL_H__ +#include <string.h> +#include <stdint.h> +#include <stdarg.h> +#include <inttypes.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_ether.h> +#include <rte_version.h> + +#include "sxe2_type.h" + +#define BIT(nr) (1UL << (nr)) +#ifndef __BITS_PER_LONG +#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG) +#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG)) + +#ifndef BIT_ULL +#define BIT_ULL(a) (1ULL << (a)) +#endif + +#define MIN(a, b) ((a) < (b) ? 
(a) : (b)) + +#define BITS_PER_BYTE 8 + +#define IS_UNICAST_ETHER_ADDR(addr) \ + ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0)) + +#define STRUCT_SIZE(ptr, field, num) \ + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) + +#ifndef TAILQ_FOREACH_SAFE +#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \ + for ((var) = TAILQ_FIRST((head)); \ + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \ + (var) = (tvar)) +#endif + +#define SXE2_QUEUE_WAIT_RETRY_CNT (50) + +#define __iomem + +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define dma_addr_t rte_iova_t + +#define resource_size_t u64 + +#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f) +#define ARRAY_SIZE(arr) RTE_DIM(arr) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +#define CPU_TO_BE16(o) rte_cpu_to_be_16(o) +#define CPU_TO_BE32(o) rte_cpu_to_be_32(o) +#define CPU_TO_BE64(o) rte_cpu_to_be_64(o) +#define BE16_TO_CPU(o) rte_be_to_cpu_16(o) + +#define NTOHS(a) rte_be_to_cpu_16(a) +#define NTOHL(a) rte_be_to_cpu_32(a) +#define HTONS(a) rte_cpu_to_be_16(a) +#define HTONL(a) rte_cpu_to_be_32(a) + +#define udelay(x) rte_delay_us(x) + +#define mdelay(x) rte_delay_us(1000 * (x)) + +#define msleep(x) rte_delay_us(1000 * (x)) + +#ifndef DIV_ROUND_UP +#define DIV_ROUND_UP(n, d) \ + (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) +#endif + +#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) + +#define __bf_shf(x) ((uint32_t)rte_bsf64(x)) + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG 32 +#endif + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask))) + +#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d) 
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + u16 tmp = *a; + *a = *b; + *b = tmp; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +struct sxe2_lock { + rte_spinlock_t spinlock; +}; +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define 
LIST_ADD(entry, head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & 
bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const 
unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void 
*sxe2_memdup(__rte_unused struct sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..56d0a11f48 --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef char s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +typedef s8 S8; +typedef s16 S16; +typedef s32 S32; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 
'common/zsda', # depends on bus. + 'common/sxe2', # depends on bus. 'mempool', # depends on common and bus. 'dma', # depends on common and bus. 'net', # depends on common, bus, mempool -- 2.47.3
* [PATCH v11 4/9] common/sxe2: add base driver skeleton 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 ` (2 preceding siblings ...) 2026-05-07 1:44 ` [PATCH v11 3/9] drivers: add sxe2 basic structures liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 5/9] drivers: add base driver probe skeleton liujie5 ` (5 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between the user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 2 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ 6 files changed, 1071 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build index 09ce556f70..b4ad4ed58d 100644 --- a/drivers/common/sxe2/meson.build +++ b/drivers/common/sxe2/meson.build @@ -15,5 +15,7 @@ cflags += [ deps += ['bus_pci', 'net', 'eal', 'ethdev'] sources = files( + 'sxe2_common.c', 'sxe2_common_log.c', + 'sxe2_ioctl_chnl.c', ) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c new file 
mode 100644 index 0000000000..dfdefb8b78 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const s8 *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const s8 *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + + 
TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = SXE2_ERR_INVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const s8 *key, const s8 *value, void *args) +{ + u32 *class_type = (u32 *)args; + s32 
ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = SXE2_ERR_INVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + 
(void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + } + + cdev->cdrv = cdrv; +l_end: + return ret; +} + 
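The wildcard matching in sxe2_dev_pci_id_match() above follows the usual rte_pci_id convention: each field of a table entry either equals the device's field exactly or is the ANY wildcard (RTE_PCI_ANY_ID, 0xffff in DPDK). A minimal standalone sketch of that rule, with hypothetical vendor/device IDs and only two fields for brevity:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ANY_ID 0xffffU  /* mirrors RTE_PCI_ANY_ID */

/* Reduced stand-in for struct rte_pci_id (vendor/device fields only). */
struct pci_id {
	uint16_t vendor_id;
	uint16_t device_id;
};

/* A table entry matches when every field is equal or wildcarded,
 * the same per-field test sxe2_dev_pci_id_match() applies. */
static bool pci_id_match(const struct pci_id *entry, const struct pci_id *dev)
{
	if (entry->vendor_id != dev->vendor_id && entry->vendor_id != ANY_ID)
		return false;
	if (entry->device_id != dev->device_id && entry->device_id != ANY_ID)
		return false;
	return true;
}
```

An entry of { 0x1f3f, ANY_ID } (IDs hypothetical) therefore claims every device of that vendor, which is why the real table terminates on vendor_id == 0 rather than on a wildcard entry.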
+static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = SXE2_ERR_BUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, &class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = SXE2_ERR_NOMEM; + goto l_free_args; + } + + ret = 
sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Fail to find device to remove."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx; + s32 i; + bool exists = false; + + for (i = 
0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + sxe2_common_pci_driver.drv_flags |= 
RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_common_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_common_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); +#ifdef SXE2_DPDK_DEBUG + sxe2_common_log_stream_init(); +#endif + sxe2_common_pci_init(); + sxe2_common_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..f62e00e053 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const s8 *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..db09dd3126 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = SXE2_ERR_IO; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, application restart required."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"] " + "opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct 
rte_pci_device *pci_dev) +{ + s32 ret = SXE2_SUCCESS; + s32 fd = 0; + s8 drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd > 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, application restart required."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + 
rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, 
SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
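The SXE2_COM_PCI_OFFSET_* macros in sxe2_ioctl_chnl.h above pack a BAR index into the top bits of a 64-bit mmap offset and the byte offset into the low 40 bits. A standalone sketch of that encoding, with the shift/mask constants copied from the header and the helper names purely illustrative (the patch only defines the macros, not these functions):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_OFFSET_SHIFT 40  /* SXE2_COM_PCI_OFFSET_SHIFT */
#define PCI_OFFSET_MASK  ((((uint64_t)1) << PCI_OFFSET_SHIFT) - 1)

/* Pack a BAR index and a byte offset into one u64,
 * as SXE2_COM_PCI_OFFSET_GEN(index, off) does. */
static uint64_t pci_offset_gen(uint64_t bar_idx, uint64_t off)
{
	return (bar_idx << PCI_OFFSET_SHIFT) | (off & PCI_OFFSET_MASK);
}

/* Inverse helpers (assumed, not in the patch): recover the two fields. */
static uint64_t pci_offset_bar(uint64_t packed)
{
	return packed >> PCI_OFFSET_SHIFT;
}

static uint64_t pci_offset_off(uint64_t packed)
{
	return packed & PCI_OFFSET_MASK;
}
```

The 40-bit offset field leaves 24 bits for the BAR index, so a single mmap offset value tells the character device both which BAR to map and where inside it to start.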
* [PATCH v11 5/9] drivers: add base driver probe skeleton 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 ` (3 preceding siblings ...) 2026-05-07 1:44 ` [PATCH v11 4/9] common/sxe2: add base driver skeleton liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 6/9] drivers: support PCI BAR mapping liujie5 ` (4 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 28 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 633 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 227 ++++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 211 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 16 files changed, 3031 insertions(+) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 
drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index db09dd3126..e22731065d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel was reset, application restart required."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64, + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build 
index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..98d0b7fc6d --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,28 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. +# Build the base subdirectory and collect its target objects + +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +cflags += ['-DSXE2_DPDK_DRIVER'] +cflags += ['-DFPGA_VER_ASIC'] +if arch_subdir != 'arm' + cflags += ['-Werror'] +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash', 'cryptodev', 'security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..b9749b0a08 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu +
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter,
struct sxe2_tx_queue *txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req = {0}; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req = {0}; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return
ret; +} diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..f2de249279 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,633 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "enable queues failed"); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + goto l_end; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + RTE_ETH_RX_OFFLOAD_QINQ_STRIP | +#endif + RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | + RTE_ETH_RX_OFFLOAD_TCP_LRO | + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + 
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + if (adapter->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->nb_rx_queues = dev->data->nb_rx_queues; + dev_info->nb_tx_queues = dev->data->nb_tx_queues; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; +
dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type 
= dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr 
*)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret 
= SXE2_SUCCESS; + + if (!cdev) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = SXE2_ERR_INVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = SXE2_ERR_INVAL; + goto
l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +#ifdef SXE2_DPDK_DEBUG +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, DEBUG); +#else +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..dc3a3175d1 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE * 2) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define 
SXE2_MODULE_SFF_SFP_TYPE 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + 
SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 
repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..e4cbd55faf --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,227 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_txq_stats tx_stats; + struct sxe2_txq_stats tx_stats_cur; + struct sxe2_txq_stats 
tx_stats_prev; +#endif + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *rxq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t) pkts; + RTE_ATOMIC(uint64_t) bytes; + RTE_ATOMIC(uint64_t) drop_pkts; + RTE_ATOMIC(uint64_t) drop_bytes; + RTE_ATOMIC(uint64_t) unicast_pkts; + RTE_ATOMIC(uint64_t) multicast_pkts; + RTE_ATOMIC(uint64_t) broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; +#ifdef SXE2_DPDK_DEBUG + struct sxe2_rxq_stats rx_stats; + struct sxe2_rxq_stats rx_stats_cur; + struct sxe2_rxq_stats rx_stats_prev; +#endif + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 
rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +#ifdef SXE2_DPDK_DEBUG +#define SXE2_RX_STATS_CNT(rxq, name, num) \ + ((((struct sxe2_rx_queue *)(rxq))->rx_stats.name) += (num)) + +#define SXE2_TX_STATS_CNT(txq, name, num) \ + ((((struct sxe2_tx_queue *)(txq))->tx_stats.name) += (num)) +#else +#define SXE2_RX_STATS_CNT(rxq, name, num) +#define SXE2_TX_STATS_CNT(txq, name, num) +#endif + +#ifdef SXE2_DPDK_DEBUG_RXTX_LOG +#define PMD_LOG_RX_DEBUG(fmt, ...)PMD_LOG_DEBUG(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_RX_INFO(fmt, ...) PMD_LOG_INFO(RX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_DEBUG(fmt, ...) PMD_LOG_DEBUG(TX, fmt, ##__VA_ARGS__) + +#define PMD_LOG_TX_INFO(fmt, ...) PMD_LOG_INFO(TX, fmt, ##__VA_ARGS__) +#else +#define PMD_LOG_RX_DEBUG(fmt, ...) +#define PMD_LOG_RX_INFO(fmt, ...) +#define PMD_LOG_TX_DEBUG(fmt, ...) +#define PMD_LOG_TX_INFO(fmt, ...) +#endif + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + +#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define 
SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 
16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << 
SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + 
__le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define 
SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + 
SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + 
SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + 
SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..1c8dccae0b --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -SXE2_ERR_INIT_VSI_CRITICAL; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System 
Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v11 6/9] drivers: support PCI BAR mapping 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 ` (4 preceding siblings ...) 2026-05-07 1:44 ` [PATCH v11 5/9] drivers: add base driver probe skeleton liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (3 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 34 +++ drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 359 insertions(+) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index e22731065d..2bd7c2b2eb 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%"PRIx64", offset=0x%"PRIx64", mmap offset=0x%"PRIx64"", + cmd_fd, bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%"PRIx64", offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index f2de249279..fa6304ebbc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* 
SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -304,6 +320,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -371,6 +412,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t
page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = SXE2_ERR_NOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -385,6 +487,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type 
%d", res_type); + ret = SXE2_ERR_FAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -425,6 +575,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if 
(!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = SXE2_ERR_NXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = SXE2_ERR_NOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = SXE2_ERR_NOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to 
map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -439,6 +740,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = 
sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index dc3a3175d1..fb7813ef80 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v11 7/9] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 ` (5 preceding siblings ...) 2026-05-07 1:44 ` [PATCH v11 6/9] drivers: support PCI BAR mapping liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 8/9] net/sxe2: support queue setup and control liujie5 ` (2 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 48 ++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 161 insertions(+) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index dfdefb8b78..537d4e9f6a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -466,12 +466,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma map, ret=%d", ret); + goto l_end; + } + +l_end: + 
return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = SXE2_ERR_NODEV; + PMD_LOG_ERR(COM, "Failed to get common device."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 2bd7c2b2eb..1a14d401e7 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "iommu does not support pa mode"); + ret = SXE2_ERR_IO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "no iommu, va mode not supported, please use pa mode."); + ret = SXE2_ERR_IO; + goto l_end; + } + } + + 
cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need to restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + return SXE2_SUCCESS; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = SXE2_ERR_BADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = SXE2_ERR_IO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ 
b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v11 8/9] net/sxe2: support queue setup and control 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 ` (6 preceding siblings ...) 2026-05-07 1:44 ` [PATCH v11 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 1:44 ` [PATCH v11 9/9] drivers: add data path for Rx and Tx liujie5 2026-05-07 2:40 ` [PATCH v11 0/9] Add Linkdata sxe2 driver Stephen Hemminger 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 64 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 +++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 7 files changed, 1143 insertions(+), 18 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 98d0b7fc6d..61467a4e31 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -23,6 +23,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index fa6304ebbc..c1a65f25ce 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 @@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 
sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -318,6 +302,12 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -345,6 +335,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index fb7813ef80..7999e4f331 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { 
#define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..00e24fc361 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + 
rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if 
(dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == SXE2_ERR_PERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth 
= ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + sxe2_rx_queue_release(dev, queue_idx); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + 
dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configure with Keep crc.", + dev->data->port_id, queue_idx); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc 
*desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Rx queue is not available or setup"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); + ret = SXE2_ERR_NO_MEMORY; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + 
PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Failed to allocate mbufs for Rx queue %u", + rx_queue_id); + ret = SXE2_ERR_NO_MEMORY; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, +
rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + } + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++
b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..7e4dd74a51 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors.
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold 
sxe2_tx_queue_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc 
*)tz->addr; + +l_end: + return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2vf tx queue:%u resource", queue_idx); + ret = SXE2_ERR_NO_MEMORY; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + 
if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = SXE2_ERR_INVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Failed to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Failed to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h
b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
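The descriptor-ring bookkeeping used throughout the Tx path in this series (per-segment descriptor counts capped by a maximum data size per descriptor, and rs_thresh-based cleanup with index wrap-around) can be illustrated with a small standalone sketch. The constant value and helper names below are stand-ins chosen for illustration, not the driver's actual definitions from the sxe2 headers:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed per-descriptor data cap for illustration; the real value
 * comes from the sxe2 headers. */
#define SXE2_TX_MAX_DATA_NUM_PER_DESC 4096u
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Number of data descriptors one mbuf chain consumes: each segment may
 * need several descriptors when its data_len exceeds the per-descriptor
 * cap (mirrors the per-segment DIV_ROUND_UP counting in the Tx path). */
unsigned int tx_data_desc_count(const uint16_t *seg_lens, unsigned int nb_segs)
{
	unsigned int count = 0, i;

	for (i = 0; i < nb_segs; i++)
		count += DIV_ROUND_UP(seg_lens[i], SXE2_TX_MAX_DATA_NUM_PER_DESC);

	return count;
}

/* Descriptors reclaimed by one cleanup pass, accounting for ring
 * wrap-around when clean_last has wrapped past index 0 (mirrors the
 * clean_num computation in the Tx cleanup routine). */
unsigned int tx_clean_count(unsigned int next_clean, unsigned int clean_last,
			    unsigned int ring_depth)
{
	return clean_last > next_clean ? clean_last - next_clean
				       : ring_depth - next_clean + clean_last;
}
```

For example, with a 4096-byte cap a two-segment packet of 1500 and 9000 bytes needs 1 + 3 = 4 data descriptors, and cleaning from index 1000 up to a wrapped index 8 on a 1024-entry ring reclaims 32 descriptors.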
* [PATCH v11 9/9] drivers: add data path for Rx and Tx 2026-05-07 1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5 ` (7 preceding siblings ...) 2026-05-07 1:44 ` [PATCH v11 8/9] net/sxe2: support queue setup and control liujie5 @ 2026-05-07 1:44 ` liujie5 2026-05-07 2:40 ` [PATCH v11 0/9] Add Linkdata sxe2 driver Stephen Hemminger 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-07 1:44 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for the sxe2 PMD. Add sxe2_rx_pkts_scattered and sxe2_tx_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 13 +- drivers/common/sxe2/sxe2_common_log.h | 105 ---- drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 20 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 15 +- drivers/net/sxe2/sxe2_txrx.c | 249 ++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 782 ++++++++++++++++++++++++++ 11 files changed, 1082 insertions(+), 133 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 537d4e9f6a..d2ed1460a3 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -28,7 +28,7 @@ static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list static
TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); -static pthread_mutex_t sxe2_common_devices_list_lock; +static rte_spinlock_t sxe2_common_devices_list_lock; static struct rte_pci_id *sxe2_common_pci_id_table; @@ -223,9 +223,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( cdev->config.kernel_reset = false; rte_ticketlock_init(&cdev->config.lock); - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); l_end: return cdev; @@ -233,10 +233,9 @@ static struct sxe2_common_device *sxe2_common_device_alloc( static void sxe2_common_device_free(struct sxe2_common_device *cdev) { - - (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + rte_spinlock_lock(&sxe2_common_devices_list_lock); TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); - (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + rte_spinlock_unlock(&sxe2_common_devices_list_lock); rte_free(cdev); } @@ -662,7 +661,7 @@ sxe2_common_init(void) if (sxe2_commoin_inited) goto l_end; - pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + rte_spinlock_init(&sxe2_common_devices_list_lock); #ifdef SXE2_DPDK_DEBUG sxe2_common_log_stream_init(); #endif diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index 8ade49d020..14074fcc4f 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -260,109 +260,4 @@ sxe2_common_log_stream_init(void); #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") -#ifdef SXE2_DPDK_DEBUG - -#define LOG_DEBUG(fmt, ...) \ - PMD_LOG_DEBUG(DRV, fmt, ##__VA_ARGS__) - -#define LOG_INFO(fmt, ...) \ - PMD_LOG_INFO(DRV, fmt, ##__VA_ARGS__) - -#define LOG_WARN(fmt, ...) 
\ - PMD_LOG_WARN(DRV, fmt, ##__VA_ARGS__) - -#define LOG_ERROR(fmt, ...) \ - PMD_LOG_ERR(DRV, fmt, ##__VA_ARGS__) - -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) - -#else -#define LOG_DEBUG(fmt, ...) -#define LOG_INFO(fmt, ...) -#define LOG_WARN(fmt, ...) -#define LOG_ERROR(fmt, ...) -#define LOG_DEBUG_BDF(dev_name, fmt, ...) \ - PMD_LOG_DEBUG(HW, fmt, ##__VA_ARGS__) - -#define LOG_INFO_BDF(dev_name, fmt, ...) \ - PMD_LOG_INFO(HW, fmt, ##__VA_ARGS__) - -#define LOG_WARN_BDF(dev_name, fmt, ...) \ - PMD_LOG_WARN(HW, fmt, ##__VA_ARGS__) - -#define LOG_ERROR_BDF(dev_name, fmt, ...) \ - PMD_LOG_ERR(HW, fmt, ##__VA_ARGS__) -#endif - -#ifdef SXE2_DPDK_DEBUG -#define LOG_DEV_DEBUG(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_INFO(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_WARN(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_DEV_ERR(fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_DEBUG(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_DEBUG_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_INFO(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_INFO_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_WARN(msglvl, fmt, ...) \ - do { \ - RTE_SET_USED(adapter); \ - LOG_WARN_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#define LOG_MSG_ERR(msglvl, fmt, ...) 
\ - do { \ - RTE_SET_USED(adapter); \ - LOG_ERROR_BDF(fmt, ##__VA_ARGS__); \ - } while (0) - -#else - -#define LOG_DEV_DEBUG(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_INFO(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_WARN(fmt, ...) RTE_SET_USED(adapter) -#define LOG_DEV_ERR(fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_DEBUG(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_INFO(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_WARN(msglvl, fmt, ...) RTE_SET_USED(adapter) -#define LOG_MSG_ERR(msglvl, fmt, ...) RTE_SET_USED(adapter) -#endif - #endif /* SXE2_COMMON_LOG_H__ */ diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 1a14d401e7..cb83fb837d 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -37,7 +37,7 @@ sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -123,7 +123,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ 
-178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = SXE2_ERR_IO; goto l_end; @@ -233,7 +233,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -246,7 +246,7 @@ sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, goto l_end; } else if (iova_mode == RTE_IOVA_VA) { if (!cdev->config.support_iommu) { - PMD_LOG_ERR(COM, "no iommu not support va mode, plese use pa mode."); + PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode."); ret = SXE2_ERR_IO; goto l_end; } @@ -289,7 +289,7 @@ sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) if (cdev->config.kernel_reset) { ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto 
l_end; } diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index fd6823fe98..23882f3f52 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 56d0a11f48..fbf4a6674f 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 61467a4e31..b331451160 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -25,6 +25,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index c1a65f25ce..68d7e36cf1 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -363,8 +367,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < 
bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -475,8 +479,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = SXE2_ERR_FAULT; goto l_end; } @@ -760,6 +765,8 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; ret = sxe2_hw_init(dev); diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..3e88ab5241 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len >
SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u 
port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + 
dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..cd9ebfa32f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..55bea8b74c --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,782 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_TX_DEBUG("desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + SXE2_TX_STATS_CNT(txq, tx_desc_not_done, 1); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info 
ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer 
*next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 
1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; + desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + 
RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_TX_DEBUG("Tx pkts set RS bit." 
+ "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; + +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_TX_DEBUG("port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + SXE2_TX_STATS_CNT(txq, tx_pkts_num, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + } else { + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_l4_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) { + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, ip_csum_err, 1); + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) { + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + } else 
{ + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, l4_csum_err, 1); + } + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) { + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + SXE2_RX_STATS_CNT(rxq, outer_ip_csum_err, 1); + } + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + SXE2_RX_STATS_CNT(rxq, ptype_pkts[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)], 1); + SXE2_RX_STATS_CNT(rxq, rx_pkts_num, 1); + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union 
sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + 
desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_RX_INFO("Rx new_mbuf_pay alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, + rxq->idx_in_pf); + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + 
rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + cur_mbuf->next = new_mbuf_pay; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
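[Editor's sketch] The CRC handling at the tail of the scattered Rx loops above — where the
stripped 4-byte FCS may span the last two mbuf segments — can be modeled in plain C. This is
a minimal sketch with a hypothetical `seg` struct standing in for the DPDK mbuf; it mirrors
the `crc_len > 0` branch of sxe2_rx_pkts_scattered(), not the real driver code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CRC_LEN 4 /* RTE_ETHER_CRC_LEN */

/* Hypothetical stand-in for a two-segment mbuf chain tail. */
struct seg {
	uint16_t data_len;
	struct seg *next;
};

/*
 * Strip the FCS from the end of a segment chain: if the last segment
 * holds only CRC bytes, it is dropped entirely and the remainder of
 * the CRC is trimmed from the previous segment; otherwise the last
 * segment is simply shortened. Returns the number of segments freed.
 */
static int strip_crc(struct seg *prev, struct seg *last,
		     uint16_t *pkt_len, uint16_t *nb_segs)
{
	int freed = 0;

	*pkt_len -= CRC_LEN;
	if (last->data_len <= CRC_LEN) {
		/* CRC spans both tail segments: drop 'last', trim 'prev'. */
		prev->data_len = prev->data_len + last->data_len - CRC_LEN;
		prev->next = NULL;
		(*nb_segs)--;
		freed = 1;
	} else {
		last->data_len -= CRC_LEN;
	}
	return freed;
}
```

E.g. with a 60-byte segment followed by a 2-byte tail segment (the CRC split 2+2 across the
boundary), the tail is freed and the first segment shrinks to 58 bytes.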
* Re: [PATCH v11 0/9] Add Linkdata sxe2 driver
  2026-05-07  1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5
  ` (8 preceding siblings ...)
  2026-05-07  1:44 ` [PATCH v11 9/9] drivers: add data path for Rx and Tx liujie5
@ 2026-05-07  2:40 ` Stephen Hemminger
  9 siblings, 0 replies; 143+ messages in thread
From: Stephen Hemminger @ 2026-05-07  2:40 UTC (permalink / raw)
To: liujie5; +Cc: dev

On Thu, 7 May 2026 09:44:40 +0800
liujie5@linkdatatechnology.com wrote:

> From: Jie Liu <liujie5@linkdatatechnology.com>
>
> V11:
> - Addressed AI comments

Not better. v11 is byte-for-byte identical to v10 except:
- The vector Rx/Tx patch (10/10) was dropped, so it's now a 9-patch series.
- One trailing blank line was removed from the data-path patch (now 9/9).

^ permalink raw reply	[flat|nested] 143+ messages in thread
* [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback
  2026-05-06 11:35 ` [PATCH v10 10/10] net/sxe2: add vectorized " liujie5
  2026-05-07  1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5
@ 2026-05-12  8:06 ` liujie5
  2026-05-12  8:06 ` [PATCH v12 01/10] mailmap: add Jie Liu liujie5
  ` (9 more replies)
  1 sibling, 10 replies; 143+ messages in thread
From: liujie5 @ 2026-05-12  8:06 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

This patch set addresses the feedback received on the v10 submission
for the sxe2 PMD. The primary focus is on fixing vector path selection,
ensuring memory safety during mbuf initialization, and cleaning up
redundant logic in the configuration functions.

v12 Changes:
- Fixed vector Rx burst function being overwritten by scalar selection.
- Refactored Rx/Tx mode set functions to seed flags from caps first,
  eliminating tautological checks.
- Added memset for mbuf_def in vector init to avoid uninitialized reads.
- Converted pci_map_addr_info to designated initializers.
- Removed dead Windows-only code in meson.build.
- Added NULL checks for mbuf free for driver-wide consistency.
- Updated burst_mode_get to accurately report AVX paths.
- Adjusted SXE2_ETH_OVERHEAD to match actual VLAN capabilities.
Jie Liu (10): mailmap: add Jie Liu doc: add sxe2 guide and release notes common/sxe2: add sxe2 basic structures drivers: add base driver skeleton drivers: add base driver probe skeleton drivers: support PCI BAR mapping common/sxe2: add ioctl interface for DMA map and unmap net/sxe2: support queue setup and control drivers: add data path for Rx and Tx net/sxe2: add vectorized Rx and Tx .mailmap | 1 + doc/guides/nics/features/sxe2.ini | 30 + doc/guides/nics/index.rst | 1 + doc/guides/nics/sxe2.rst | 34 + doc/guides/rel_notes/release_26_07.rst | 4 + drivers/common/sxe2/meson.build | 15 + drivers/common/sxe2/sxe2_common.c | 685 +++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 ++ drivers/common/sxe2/sxe2_common_log.h | 83 ++ drivers/common/sxe2/sxe2_errno.h | 110 +++ drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 + drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++ drivers/common/sxe2/sxe2_type.h | 60 ++ drivers/meson.build | 1 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 32 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + drivers/net/sxe2/sxe2_drv_cmd.h | 389 +++++++++ drivers/net/sxe2/sxe2_ethdev.c | 942 ++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 315 +++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 67 ++ drivers/net/sxe2/sxe2_queue.h | 194 +++++ drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 + drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 + drivers/net/sxe2/sxe2_txrx.c | 372 ++++++++ drivers/net/sxe2/sxe2_txrx.h | 22 + drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.c | 945 +++++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 17 + 
drivers/net/sxe2/sxe2_txrx_vec.c | 197 +++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 545 ++++++++++++ drivers/net/sxe2/sxe2_vsi.c | 212 +++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++ 43 files changed, 9760 insertions(+) create mode 100644 doc/guides/nics/features/sxe2.ini create mode 100644 doc/guides/nics/sxe2.rst create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 
drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h -- 2.47.3 ^ permalink raw reply [flat|nested] 143+ messages in thread
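[Editor's sketch] The "seed flags from caps first" refactor noted in the v12 changelog can be
illustrated with a minimal sketch. The bit names below are hypothetical, not the real sxe2 or
DPDK defines: the point is that intersecting the requested offload mask with the reported
capabilities up front removes the tautological per-bit checks, leaving only genuinely
dependent rules to test afterwards.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical offload capability bits, not the real sxe2 defines. */
#define OFF_CKSUM (1u << 0)
#define OFF_VLAN  (1u << 1)
#define OFF_TSO   (1u << 2)

/*
 * Seed the enabled-offload mask from the device capabilities first:
 * anything the hardware cannot do is masked out immediately, so the
 * per-feature logic below only ever sees supported bits.
 */
static uint32_t offloads_from_caps(uint32_t requested, uint32_t caps)
{
	uint32_t enabled = requested & caps; /* seed from caps */

	/* Example dependent rule: TSO requires checksum offload. */
	if ((enabled & OFF_TSO) && !(enabled & OFF_CKSUM))
		enabled &= ~OFF_TSO;

	return enabled;
}
```

With this shape there is no need to test "is OFF_TSO in caps?" inside the function body —
unsupported bits were never seeded into `enabled` in the first place.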
* [PATCH v12 01/10] mailmap: add Jie Liu
  2026-05-12  8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5
@ 2026-05-12  8:06 ` liujie5
  2026-05-12  8:06 ` [PATCH v12 02/10] doc: add sxe2 guide and release notes liujie5
  ` (8 subsequent siblings)
  9 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-12  8:06 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 .mailmap | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.mailmap b/.mailmap
index 895412e568..d2c4485636 100644
--- a/.mailmap
+++ b/.mailmap
@@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com>
 Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com>
 Jie Hai <haijie1@huawei.com>
 Jie Liu <jie2.liu@hxt-semitech.com>
+Jie Liu <liujie5@linkdatatechnology.com>
 Jie Pan <panjie5@jd.com>
 Jie Wang <jie1x.wang@intel.com>
 Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com>
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 143+ messages in thread
* [PATCH v12 02/10] doc: add sxe2 guide and release notes
  2026-05-12  8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5
  2026-05-12  8:06 ` [PATCH v12 01/10] mailmap: add Jie Liu liujie5
@ 2026-05-12  8:06 ` liujie5
  2026-05-12  8:06 ` [PATCH v12 03/10] common/sxe2: add sxe2 basic structures liujie5
  ` (7 subsequent siblings)
  9 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-12  8:06 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Add a new guide for the SXE2 PMD in the nics directory. The guide
contains driver capabilities, prerequisites, and compilation/usage
instructions. Update the release notes to announce the addition of
the sxe2 network driver.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 doc/guides/nics/features/sxe2.ini      | 30 +++++++++++++++++++++++
 doc/guides/nics/index.rst              |  1 +
 doc/guides/nics/sxe2.rst               | 34 ++++++++++++++++++++++++++
 doc/guides/rel_notes/release_26_07.rst |  4 +++
 4 files changed, 69 insertions(+)
 create mode 100644 doc/guides/nics/features/sxe2.ini
 create mode 100644 doc/guides/nics/sxe2.rst

diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini
new file mode 100644
index 0000000000..2718a702d4
--- /dev/null
+++ b/doc/guides/nics/features/sxe2.ini
@@ -0,0 +1,30 @@
+;
+; Supported features of the 'sxe2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates it is only supported when the non-vector
+; path is selected.
+;
+[Features]
+Fast mbuf free = P
+Free Tx mbuf on demand = Y
+Burst mode info = Y
+Queue start/stop = Y
+MTU update = Y
+Buffer split on Rx = P
+Scattered Rx = Y
+CRC offload = Y
+VLAN offload = Y
+QinQ offload = P
+L3 checksum offload = Y
+L4 checksum offload = Y
+Timestamp offload = P
+Inner L3 checksum = P
+Inner L4 checksum = P
+Rx descriptor status = Y
+Tx descriptor status = Y
+FreeBSD = Y
+Linux = Y
+x86-32 = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index cb818284fe..e20be478f8 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -68,6 +68,7 @@ Network Interface Controller Drivers
    rnp
    sfc_efx
    softnic
+   sxe2
    tap
    thunderx
    txgbe
diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst
new file mode 100644
index 0000000000..7fcf9c085b
--- /dev/null
+++ b/doc/guides/nics/sxe2.rst
@@ -0,0 +1,34 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+SXE2 Poll Mode Driver
+======================
+
+The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for
+10/25/50/100 Gbps Network Adapters.
+The embedded switch, Physical Functions (PF),
+and SR-IOV Virtual Functions (VF) are supported.
+
+Implementation details
+----------------------
+
+The sxe2 PMD is designed to operate alongside the sxe2 kernel network driver.
+For management and control operations, the PMD communicates with the kernel
+driver via ioctl interfaces. These commands are processed by the kernel
+driver and subsequently dispatched to the hardware firmware for execution.
+
+For security and robustness, the driver's data path is optimized to operate
+using virtual addresses (IOVA as VA mode). However, to ensure full
+compatibility in system environments where an IOMMU is absent or disabled,
+the driver also provides an explicit path to support physical addressing
+(IOVA as PA mode).
+
+The hardware is capable of handling the corresponding IOVA addresses (either
+VA or PA) directly, as provided by the DPDK memory subsystem. This ensures
+that DPDK applications can only access memory segments explicitly allocated
+to the current process, preventing unauthorized access to random physical
+memory.
+
+This capability allows the PMD to coexist with kernel network interfaces,
+which remain functional, although they stop receiving unicast packets as
+long as they share the same MAC address.
diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index f012d47a4b..fa0f0f5cca 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -64,6 +64,10 @@ New Features
   * ``--auto-probing`` enables the initial bus probing, which is the current
     default behavior.
 
+* **Added Linkdata sxe2 ethernet driver.**
+
+  Added network driver for Linkdata Network Adapters.
+
 Removed Items
 -------------
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 143+ messages in thread
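[Editor's sketch] The ioctl command channel the guide above describes (PMD to kernel driver
to firmware) implies validating each command header before dispatch. The header layout below
is hypothetical — the real layout lives in sxe2_ioctl_chnl.h and is not reproduced here —
but the return values mirror the channel error codes defined later in sxe2_errno.h
(SXE2_ERR_CMD_INVAL_MAGIC = -166, SXE2_ERR_CMD_INVAL_LEN = -165):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical command header; the real sxe2 layout is in sxe2_ioctl_chnl.h. */
#define CMD_MAGIC   0x53584532u /* "SXE2" */
#define CMD_MAX_LEN 4096u

struct cmd_hdr {
	uint32_t magic;
	uint32_t opcode;
	uint32_t len; /* payload bytes following the header */
};

/* Reject malformed headers before the command reaches firmware. */
static int cmd_hdr_validate(const struct cmd_hdr *hdr)
{
	if (hdr->magic != CMD_MAGIC)
		return -166; /* SXE2_ERR_CMD_INVAL_MAGIC */
	if (hdr->len > CMD_MAX_LEN)
		return -165; /* SXE2_ERR_CMD_INVAL_LEN */
	return 0; /* SXE2_SUCCESS */
}
```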
* [PATCH v12 03/10] common/sxe2: add sxe2 basic structures 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 2026-05-12 8:06 ` [PATCH v12 01/10] mailmap: add Jie Liu liujie5 2026-05-12 8:06 ` [PATCH v12 02/10] doc: add sxe2 guide and release notes liujie5 @ 2026-05-12 8:06 ` liujie5 2026-05-12 8:06 ` [PATCH v12 04/10] drivers: add base driver skeleton liujie5 ` (6 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 8:06 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch adds the base infrastructure for the sxe2 common library. It includes the mandatory OS abstraction layer (OSAL), common structure definitions, error codes, and the logging system implementation. Specifically, this commit: - Implements the logging stream management using RTE_LOG_LINE. - Defines device-specific error codes and status registers. - Adds the initial meson build configuration for the common library. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common_log.h | 84 +++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 586 ++++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 61 ++ 6 files changed, 1584 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..a7d2157610 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) 
\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#endif /* __SXE2_COMMON_LOG_H__ */ + diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = -ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + 
+ SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, + + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMIEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT 
BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 +#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) 
+#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + 
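The link status and speed fields above share one register per port group: for port 0, bits 0-2 carry the speed code (4 = 10G, 1 = 25G, 2 = 50G, 3 = 100G) and bit 3 the link state. A minimal decode sketch, with the relevant defines inlined; the register read itself is out of scope, so a raw 32-bit value is taken as input, and the helper names are illustrative, not part of the patch:

```c
#include <stdint.h>

/* Mirrors the SXE2_LINK_* layout above: per port, a 3-bit speed field
 * and a 1-bit link-status field packed into one 32-bit register. */
#define SXE2_LINK_STATUS_PORT0_POS 3
#define SXE2_LINK_STATUS_MASK      1
#define SXE2_LINK_SPEED_PORT0_POS  0
#define SXE2_LINK_SPEED_MASK       7
#define SXE2_LINK_REG_GET_10G_VALUE 4

static inline int sxe2_port0_link_up(uint32_t reg)
{
	return (reg >> SXE2_LINK_STATUS_PORT0_POS) & SXE2_LINK_STATUS_MASK;
}

static inline uint32_t sxe2_port0_speed(uint32_t reg)
{
	return (reg >> SXE2_LINK_SPEED_PORT0_POS) & SXE2_LINK_SPEED_MASK;
}
```

The other three ports follow the same pattern at the PORT1/PORT2/PORT3 positions, 8 bits apart.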
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define 
SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define 
SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define 
SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 
0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 
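The SXE2_FW_VER masks above pack four 8-bit version components (main.sub.fix.build) into one register, with the masks built by SXE2_BITS_MASK. A sketch of how a reader of that register could split it; the masks are inlined in expanded form and the parse helper is illustrative, not part of the patch:

```c
#include <stdint.h>

/* Expanded forms of the SXE2_FW_VER_* masks and shifts above. */
#define SXE2_FW_VER_BUILD_M    0xFFUL
#define SXE2_FW_VER_FIX_M      (0xFFUL << 8)
#define SXE2_FW_VER_SUB_M      (0xFFUL << 16)
#define SXE2_FW_VER_MAIN_M     (0xFFUL << 24)
#define SXE2_FW_VER_FIX_SHIFT  8
#define SXE2_FW_VER_SUB_SHIFT  16
#define SXE2_FW_VER_MAIN_SHIFT 24

/* Split a raw SXE2_FW_VER value into {main, sub, fix, build}. */
static inline void sxe2_fw_ver_parse(uint32_t ver, uint8_t out[4])
{
	out[0] = (ver & SXE2_FW_VER_MAIN_M) >> SXE2_FW_VER_MAIN_SHIFT;
	out[1] = (ver & SXE2_FW_VER_SUB_M) >> SXE2_FW_VER_SUB_SHIFT;
	out[2] = (ver & SXE2_FW_VER_FIX_M) >> SXE2_FW_VER_FIX_SHIFT;
	out[3] = ver & SXE2_FW_VER_BUILD_M;
}
```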
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..92f49e7a20 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_INTERNAL_VER_H__ +#define __SXE2_INTERNAL_VER_H__ + +#define SXE2_VER_MAJOR_OFFSET (16) +#define SXE2_MK_VER(major, minor) \ + (((major) << SXE2_VER_MAJOR_OFFSET) | (minor)) +#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff) +#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff) + +#define SXE2_ITR_VER_MAJOR_V100 1 +#define SXE2_ITR_VER_MAJOR_V200 2 + +#define SXE2_ITR_VER_MAJOR 1 +#define SXE2_ITR_VER_MINOR 1 +#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR) + +#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100) +#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200) + +#define SXE2LIB_ITR_VER_MAJOR 1 +#define SXE2LIB_ITR_VER_MINOR 1 +#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR) + +#define SXE2_DRV_CLI_VER_MAJOR 1 +#define SXE2_DRV_CLI_VER_MINOR 1 +#define SXE2_DRV_CLI_VER \ + SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR) + +#endif /* __SXE2_INTERNAL_VER_H__ */ diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h new file mode 100644 index 0000000000..e0f4b753b2 --- /dev/null +++ b/drivers/common/sxe2/sxe2_osal.h @@ -0,0 +1,586 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_OSAL_H__ +#define __SXE2_OSAL_H__ +#include <string.h> +#include <stdint.h> +#include <stdarg.h> +#include <inttypes.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_ether.h> +#include <rte_version.h> + +#include "sxe2_type.h" + +#define BIT(nr) (1UL << (nr)) +#ifndef __BITS_PER_LONG +#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG) +#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG)) + +#ifndef BIT_ULL +#define BIT_ULL(a) (1ULL << (a)) +#endif + +#define MIN(a, b) ((a) < (b) ? 
(a) : (b)) + +#define BITS_PER_BYTE 8 + +#define IS_UNICAST_ETHER_ADDR(addr) \ + ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0)) + +#define STRUCT_SIZE(ptr, field, num) \ + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) + +#ifndef TAILQ_FOREACH_SAFE +#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \ + for ((var) = TAILQ_FIRST((head)); \ + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \ + (var) = (tvar)) +#endif + +#define SXE2_QUEUE_WAIT_RETRY_CNT (50) + +#define __iomem + +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define dma_addr_t rte_iova_t + +#define resource_size_t u64 + +#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f) +#define ARRAY_SIZE(arr) RTE_DIM(arr) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +#define CPU_TO_BE16(o) rte_cpu_to_be_16(o) +#define CPU_TO_BE32(o) rte_cpu_to_be_32(o) +#define CPU_TO_BE64(o) rte_cpu_to_be_64(o) +#define BE16_TO_CPU(o) rte_be_to_cpu_16(o) + +#define NTOHS(a) rte_be_to_cpu_16(a) +#define NTOHL(a) rte_be_to_cpu_32(a) +#define HTONS(a) rte_cpu_to_be_16(a) +#define HTONL(a) rte_cpu_to_be_32(a) + +#define udelay(x) rte_delay_us(x) + +#define mdelay(x) rte_delay_us(1000 * (x)) + +#define msleep(x) rte_delay_us(1000 * (x)) + +#ifndef DIV_ROUND_UP +#define DIV_ROUND_UP(n, d) \ + (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) +#endif + +#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) + +#define __bf_shf(x) ((uint32_t)rte_bsf64(x)) + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG 32 +#endif + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask))) + +#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d) 
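The FIELD_PREP()/FIELD_GET() pair above follows the Linux-kernel bitfield idiom: the field's shift is not passed separately but derived from the mask itself via __bf_shf() (rte_bsf64, index of the lowest set bit). A standalone re-derivation of that idiom, with a plain loop standing in for rte_bsf64 so the sketch compiles without DPDK headers; the mask must be non-zero, and unlike the kernel version the shift here is computed at run time:

```c
#include <stdint.h>

/* Stand-in for rte_bsf64: index of the lowest set bit (mask != 0). */
static inline uint32_t bf_shf(uint64_t mask)
{
	uint32_t shf = 0;

	while (!(mask & 1)) {
		mask >>= 1;
		shf++;
	}
	return shf;
}

/* Place a value into the field selected by mask, and extract it back. */
#define FIELD_PREP(mask, val) ((((uint64_t)(val)) << bf_shf(mask)) & (mask))
#define FIELD_GET(mask, reg)  (((reg) & (mask)) >> bf_shf(mask))
```

For example, with mask 0x1800 (bits 11-12), FIELD_PREP(0x1800, 2) shifts the value to bit 11, and FIELD_GET recovers it from a register image.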
+ +static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + u16 tmp; + + if (unlikely(*a == *b)) + return; + tmp = *a; + *a = *b; + *b = tmp; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define LIST_ADD(entry, 
head) sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) \ + (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & bitmap2[k]); + if 
(bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const unsigned long *src2, u32 
nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void *sxe2_memdup(__rte_unused struct 
sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..86923adf6f --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <stdio.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef int8_t s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif /* __SXE2_TYPES_H__ */ -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v12 04/10] drivers: add base driver skeleton
2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5
` (2 preceding siblings ...)
2026-05-12 8:06 ` [PATCH v12 03/10] common/sxe2: add sxe2 basic structures liujie5
@ 2026-05-12 8:06 ` liujie5
2026-05-12 8:06 ` [PATCH v12 05/10] drivers: add base driver probe skeleton liujie5
` (5 subsequent siblings)
9 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-12 8:06 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Initialize the sxe2 PMD skeleton by implementing the PCI probe and
remove functions. This includes the setup and cleanup of a character
device used for control path communication between user space and the
hardware. The character device provides an ioctl-based interface for
management operations, supporting device-specific configuration.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 drivers/common/sxe2/meson.build            |  15 +
 drivers/common/sxe2/sxe2_common.c          | 636 +++++++++++++++++++++
 drivers/common/sxe2/sxe2_common.h          |  86 +++
 drivers/common/sxe2/sxe2_ioctl_chnl.c      | 161 ++++++
 drivers/common/sxe2/sxe2_ioctl_chnl.h      | 141 +++++
 drivers/common/sxe2/sxe2_ioctl_chnl_func.h |  45 ++
 drivers/meson.build                        |   1 +
 7 files changed, 1085 insertions(+)
 create mode 100644 drivers/common/sxe2/meson.build
 create mode 100644 drivers/common/sxe2/sxe2_common.c
 create mode 100644 drivers/common/sxe2/sxe2_common.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h

diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build
new file mode 100644
index 0000000000..f1cc1205a0
--- /dev/null
+++ b/drivers/common/sxe2/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common.c', + 'sxe2_ioctl_chnl.c', +) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c new file mode 100644 index 0000000000..7d4001343a --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> +#include <pthread.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const char *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const char *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device 
*rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = -EINVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = -EINVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const char *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = -EINVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if 
(strcmp(pair->key, key_match) == 0) {
+			ret = (*handler)(pair->key, pair->value, opaque_arg);
+			if (ret)
+				goto l_end;
+
+			kv_info->is_used[i] = true;
+			break;
+		}
+	}
+
+l_end:
+	return ret;
+}
+
+static s32 sxe2_parse_class_type(const char *key, const char *value, void *args)
+{
+	u32 *class_type = (u32 *)args;
+	s32 ret = SXE2_SUCCESS;
+
+	*class_type = sxe2_class_name_to_value(value);
+	if (*class_type == SXE2_CLASS_TYPE_INVALID) {
+		ret = -EINVAL;
+		PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value);
+	}
+
+	return ret;
+}
+
+static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+	s32 ret = SXE2_SUCCESS;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		goto l_end;
+
+	ret = sxe2_drv_dev_open(cdev, pci_dev);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret);
+		goto l_end;
+	}
+
+	ret = sxe2_drv_dev_handshark(cdev);
+	if (ret != SXE2_SUCCESS) {
+		PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret);
+		goto l_close_dev;
+	}
+
+	goto l_end;
+
+l_close_dev:
+	sxe2_drv_dev_close(cdev);
+l_end:
+	return ret;
+}
+
+static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+
+	if (TAILQ_EMPTY(&sxe2_common_devices_list))
+		(void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL);
+
+	sxe2_drv_dev_close(cdev);
+}
+
+static struct sxe2_common_device *sxe2_common_device_alloc(
+	struct rte_device *rte_dev, u32 class_type)
+{
+	struct sxe2_common_device *cdev = NULL;
+
+	cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0);
+	if (cdev == NULL) {
+		PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device.");
+		goto l_end;
+	}
+	cdev->dev = rte_dev;
+	cdev->class_type = class_type;
+	cdev->config.kernel_reset = false;
+	rte_ticketlock_init(&cdev->config.lock);
+
+	(void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+	TAILQ_INSERT_TAIL(&sxe2_common_devices_list,
cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if 
(!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + } + + cdev->cdrv = cdrv; +l_end: + return ret; +} + +static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = -EINVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = -EBUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, 
&class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = -ENOMEM; + goto l_free_args; + } + + ret = sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = -ENODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 
0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool exists = false; + + for (i = 0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return 
ret;
+}
+
+static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver)
+{
+	if (driver->id_table != NULL) {
+		if (sxe2_common_pci_id_table_update(driver->id_table) != 0)
+			return;
+	}
+
+	if (driver->intr_lsc)
+		sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC;
+	if (driver->intr_rmv)
+		sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register)
+void
+sxe2_class_driver_register(struct sxe2_class_driver *driver)
+{
+	sxe2_common_driver_on_register_pci(driver);
+	TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next);
+}
+
+static void sxe2_common_pci_init(void)
+{
+	const struct rte_pci_id empty_table[] = {
+		{
+			.vendor_id = 0
+		},
+	};
+	s32 ret = SXE2_ERROR;
+
+	if (sxe2_common_pci_id_table == NULL) {
+		ret = sxe2_common_pci_id_table_update(empty_table);
+		if (ret != SXE2_SUCCESS)
+			goto l_end;
+	}
+	rte_pci_register(&sxe2_common_pci_driver);
+
+l_end:
+	return;
+}
+
+static bool sxe2_common_inited;
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init)
+void
+sxe2_common_init(void)
+{
+	if (sxe2_common_inited)
+		goto l_end;
+
+	pthread_mutex_init(&sxe2_common_devices_list_lock, NULL);
+	sxe2_common_pci_init();
+	sxe2_common_inited = true;
+
+l_end:
+	return;
+}
+
+RTE_FINI(sxe2_common_pci_finish)
+{
+	if (sxe2_common_pci_id_table != NULL) {
+		rte_pci_unregister(&sxe2_common_pci_driver);
+		free(sxe2_common_pci_id_table);
+	}
+}
+
+RTE_PMD_EXPORT_NAME(sxe2_common_pci);
+
+RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE);
diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h
new file mode 100644
index 0000000000..d02d281a70
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const char *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..0d300e0f81 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = -EIO; + + if (cdev->config.kernel_reset) { + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]" + "opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct rte_pci_device *pci_dev) +{ + s32 
ret = SXE2_SUCCESS; + s32 fd = 0; + char drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Failed to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd >= 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = -EIO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & 
BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ 
_IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 'common/zsda', # depends on bus. + 'common/sxe2', # depends on bus. 'mempool', # depends on common and bus. 'dma', # depends on common and bus. 'net', # depends on common, bus, mempool -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v12 05/10] drivers: add base driver probe skeleton 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (3 preceding siblings ...) 2026-05-12 8:06 ` [PATCH v12 04/10] drivers: add base driver skeleton liujie5 @ 2026-05-12 8:06 ` liujie5 2026-05-12 8:06 ` [PATCH v12 06/10] drivers: support PCI BAR mapping liujie5 ` (4 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 8:06 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.h | 2 +- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 21 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 ++++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 611 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 191 +++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 212 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 17 files changed, 2968 insertions(+), 1 deletion(-) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 
drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h index d02d281a70..090b643548 100644 --- a/drivers/common/sxe2/sxe2_common.h +++ b/drivers/common/sxe2/sxe2_common.h @@ -57,7 +57,7 @@ typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); struct sxe2_class_driver { TAILQ_ENTRY(sxe2_class_driver) next; enum sxe2_class_type drv_class; - const s8 *name; + const char *name; sxe2_class_driver_probe_t *probe; sxe2_class_driver_remove_t *remove; const struct rte_pci_id *id_table; diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 0d300e0f81..b8830039ff 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64, + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- 
a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..6c9a86423a --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,21 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash','cryptodev','security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..78e2a30614 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = -ENOMEM; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = -EINVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue 
*txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = -ENOMEM; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return ret; +} diff --git 
a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..a6cb51789e --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,611 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start queues."); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + 
RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void 
sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type = dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + 
PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr *)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + 
PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev = NULL; + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret = SXE2_SUCCESS; + + if (!cdev) { + ret = -EINVAL; + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI(cdev->dev); + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = -ENOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = -EINVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = 
eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, ð_da); + if (ret != 0) { + ret = -EINVAL; + goto l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, ð_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, 
pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..412f5d2b14 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define 
SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define SXE2_MODULE_SFF_SFP_TYPE 0x03 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct 
sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + 
SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..7fa22e2820 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,191 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct 
sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *rxq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t)pkts; + RTE_ATOMIC(uint64_t)bytes; + RTE_ATOMIC(uint64_t)drop_pkts; + RTE_ATOMIC(uint64_t)drop_bytes; + RTE_ATOMIC(uint64_t)unicast_pkts; + RTE_ATOMIC(uint64_t)multicast_pkts; + RTE_ATOMIC(uint64_t)broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git 
a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + 
+#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define 
SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + 
SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + 
__le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define 
SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define 
SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << 
SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define 
SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..e1e0e279cd --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,212 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == -EPERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -ENOMEM; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); + adapter->vsi_ctxt.main_vsi = NULL; +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars 
Micro System Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v12 06/10] drivers: support PCI BAR mapping 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (4 preceding siblings ...) 2026-05-12 8:06 ` [PATCH v12 05/10] drivers: add base driver probe skeleton liujie5 @ 2026-05-12 8:06 ` liujie5 2026-05-12 8:06 ` [PATCH v12 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (3 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 8:06 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 38 +++- drivers/net/sxe2/sxe2_ethdev.c | 308 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 362 insertions(+), 2 deletions(-) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index b8830039ff..8c55f5098f 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, 
src=0x%"PRIx64", offset=0x%"PRIx64"", + cmd_fd, bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) @@ -167,7 +201,7 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) s32 ret = SXE2_SUCCESS; if (cdev->config.kernel_reset) { - ret = SXE2_ERR_PERM; + ret = -EPERM; PMD_LOG_WARN(COM, "kernel reseted, need restart app."); goto l_end; } @@ -179,7 +213,7 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) if (ret < 0) { PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", virt, len, strerror(errno)); - ret = SXE2_ERR_IO; + ret = -EIO; goto l_end; } diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index a6cb51789e..75f4c2f341 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -2,6 +2,7 @@ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
*/ +#include <errno.h> +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> @@ -54,6 +55,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -151,6 +167,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -287,6 +304,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) + { + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -354,6 +396,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter 
*adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, org_len = 0"); + ret = -EFAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = -EFAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = -EFAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = -ENOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -368,6 +471,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct
sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = -EFAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -408,6 +559,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = 
adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = -ENXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = -ENOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = -ENOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = -ENOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT,
"Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } +l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct
rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -425,6 +727,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index 412f5d2b14..698e2ee4a2 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) +#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
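A note for reviewers of sxe2_dev_pci_seg_map() above: the page-alignment arithmetic (floor-align the register offset to a page, remember the offset inside the first page, round the window length up to whole pages) can be sketched standalone in plain C. PAGE_SIZE, align_floor(), align_up() and map_window() below are illustrative stand-ins for rte_mem_page_size()/RTE_ALIGN_FLOOR/RTE_ALIGN, not DPDK API:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative fixed page size; the driver queries rte_mem_page_size(). */
#define PAGE_SIZE 4096UL

static uint64_t align_floor(uint64_t v, uint64_t a) { return v & ~(a - 1); }
static uint64_t align_up(uint64_t v, uint64_t a) { return (v + a - 1) & ~(a - 1); }

/* Given a raw BAR offset and length, derive the page-aligned mmap()
 * window plus the offset of the target register inside that window. */
static void map_window(uint64_t org_offset, uint64_t org_len,
		       uint64_t *aligned_offset, uint64_t *inner_offset,
		       uint64_t *aligned_len)
{
	*aligned_offset = align_floor(org_offset, PAGE_SIZE);
	*inner_offset = org_offset - *aligned_offset;
	*aligned_len = align_up(*inner_offset + org_len, PAGE_SIZE);
}
```

For example, a 4-byte doorbell at BAR offset 0x1008 yields a one-page window at 0x1000 with inner offset 8, while an 8-byte region starting at 0xFFC straddles a page boundary and needs a two-page window starting at 0.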
* [PATCH v12 07/10] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (5 preceding siblings ...) 2026-05-12 8:06 ` [PATCH v12 06/10] drivers: support PCI BAR mapping liujie5 @ 2026-05-12 8:06 ` liujie5 2026-05-12 8:06 ` [PATCH v12 08/10] net/sxe2: support queue setup and control liujie5 ` (2 subsequent siblings) 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 8:06 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 50 +++++++++- drivers/common/sxe2/sxe2_ioctl_chnl.c | 108 ++++++++++++++++++++- drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 164 insertions(+), 3 deletions(-) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 7d4001343a..e04982e92f 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -443,7 +443,7 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) cdev = sxe2_rtedev_to_cdev(&pci_dev->device); if (cdev == NULL) { ret = -ENODEV; - PMD_LOG_ERR(COM, "Fail to get remove device."); + PMD_LOG_ERR(COM, "Fail to get device when remove."); goto l_end; } @@ -467,12 +467,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret =
SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = -ENODEV; + PMD_LOG_ERR(COM, "Fail to get device when dma map."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma map, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = -ENODEV; + PMD_LOG_ERR(COM, "Fail to get device when dma unmap."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Failed to dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 8c55f5098f..4dfc4fd0fa 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -168,7 +168,7 @@ void void *virt = NULL; if (cdev->config.kernel_reset) { - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_err; } @@ -202,7 +202,7 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) if (cdev->config.kernel_reset) { ret = -EPERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt,
u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "pa mode is not supported with iommu"); + ret = -EIO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "va mode is not supported without iommu, please use pa mode."); + ret = -EIO; + goto l_end; + } + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = -EIO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + goto l_end; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = -EIO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
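For readers skimming sxe2_drv_dev_dma_map() above, the IOVA-mode gating reduces to a small decision table: the per-buffer map ioctl is only issued in VA mode with an IOMMU present; PA mode without an IOMMU needs no mapping at all, and the other two combinations are rejected. A minimal sketch of that table (the enum and return codes here are illustrative, not rte_eal_iova_mode()'s actual values):

```c
#include <assert.h>

enum iova_mode { IOVA_PA, IOVA_VA };	/* illustrative stand-in */

/* 1 = issue the DMA-map ioctl, 0 = success with no ioctl needed,
 * -1 = unsupported combination (the driver returns -EIO here). */
static int dma_map_action(enum iova_mode mode, int has_iommu)
{
	if (mode == IOVA_PA)
		return has_iommu ? -1 : 0;	/* PA + IOMMU: rejected */
	return has_iommu ? 1 : -1;		/* VA requires the IOMMU */
}
```

This mirrors the two nested checks in the patch: in PA mode the device uses physical addresses directly, so the function returns success without touching the ioctl channel.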
* [PATCH v12 08/10] net/sxe2: support queue setup and control 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (6 preceding siblings ...) 2026-05-12 8:06 ` [PATCH v12 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-12 8:06 ` liujie5 2026-05-12 8:06 ` [PATCH v12 09/10] drivers: add data path for Rx and Tx liujie5 2026-05-12 8:06 ` [PATCH v12 10/10] net/sxe2: add vectorized " liujie5 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 8:06 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_drv_cmd.h | 9 - drivers/net/sxe2/sxe2_ethdev.c | 66 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 8 files changed, 1145 insertions(+), 27 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 6c9a86423a..8638244d80 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -16,6 +16,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h index 4094442077..f236e30c40 100644 --- a/drivers/net/sxe2/sxe2_drv_cmd.h +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -5,17 +5,8 @@ #ifndef __SXE2_DRV_CMD_H__ #define __SXE2_DRV_CMD_H__ -#ifdef SXE2_DPDK_DRIVER #include "sxe2_type.h" #define SXE2_DPDK_RESOURCE_INSUFFICIENT -#endif - -#ifdef SXE2_LINUX_DRIVER -#ifdef __KERNEL__ -#include <linux/types.h> -#include <linux/if_ether.h> -#endif -#endif #define SXE2_DRV_CMD_MODULE_S (16) #define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 75f4c2f341..bb51b9fb71 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -25,6 +25,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -81,14 +83,6 
@@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -107,16 +101,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -302,6 +286,14 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + .rx_queue_release = sxe2_rx_queue_release, + .tx_queue_release = sxe2_tx_queue_release, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -329,6 +321,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + 
PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index 698e2ee4a2..4ef7854479 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { #define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..6b42297382 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const 
struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = -EINVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == -EPERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold 
sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth = ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + 
rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = -EINVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = -EINVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = -EINVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = -EINVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configured with Keep CRC.", + dev->data->port_id, queue_idx); + ret = -EINVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret =
-ENOMEM; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc *desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Failed to allocate rx mbuf"); + ret = -ENOMEM; + goto l_err_free_mbuf; + } + + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + mbuf->next = NULL; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX");
+ ret = -ENOMEM; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = -EINVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = -EINVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u failed to allocate ring mbufs", + rx_queue_id); + ret = -ENOMEM; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + 
dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + 
} + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..b043611c8d --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = -EINVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = -EINVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = -EINVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = -EINVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = -EINVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = -EINVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold sxe2_tx_queue_free(struct 
sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc *)tz->addr; + +l_end: + 
return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = -EINVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2 tx queue:%u resource", queue_idx); + ret = -ENOMEM; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { 
+ PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = -EINVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = -EINVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- 
/dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v12 09/10] drivers: add data path for Rx and Tx 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (7 preceding siblings ...) 2026-05-12 8:06 ` [PATCH v12 08/10] net/sxe2: support queue setup and control liujie5 @ 2026-05-12 8:06 ` liujie5 2026-05-12 8:06 ` [PATCH v12 10/10] net/sxe2: add vectorized " liujie5 9 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 8:06 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 1 + drivers/common/sxe2/sxe2_common_log.h | 1 - drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 8 +- drivers/common/sxe2/sxe2_osal.h | 4 +- drivers/common/sxe2/sxe2_type.h | 1 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 22 +- drivers/net/sxe2/sxe2_txrx.c | 247 +++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 945 ++++++++++++++++++++++++++ 11 files changed, 1238 insertions(+), 17 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index e04982e92f..73b288d5d8 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -664,6 +664,7 @@ sxe2_common_init(void) goto l_end; pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + 
sxe2_common_pci_init(); sxe2_commoin_inited = true; diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index a7d2157610..cbb53263b5 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -81,4 +81,3 @@ extern s32 sxe2_log_hw; #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") #endif /* __SXE2_COMMON_LOG_H__ */ - diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 4dfc4fd0fa..b9224cf197 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 
0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = -EIO; goto l_end; diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index e0f4b753b2..20d1accd5f 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? (a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ @@ -88,7 +86,7 @@ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) #endif -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) #define __bf_shf(x) ((uint32_t)rte_bsf64(x)) diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h index 86923adf6f..e4ef6ed2ce 100644 --- a/drivers/common/sxe2/sxe2_type.h +++ b/drivers/common/sxe2/sxe2_type.h @@ -8,7 +8,6 @@ #include <sys/time.h> #include <stdlib.h> -#include <stdio.h> #include <errno.h> #include <stdarg.h> #include <unistd.h> diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 8638244d80..b348dd71a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -18,6 +18,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index bb51b9fb71..38e3967c56 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -27,6 +27,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -132,6 +133,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = 
sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -349,8 +353,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -461,8 +465,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = -EFAULT; goto l_end; } @@ -746,10 +751,17 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; - if (rte_eal_process_type() != RTE_PROC_PRIMARY) + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + if (ret != SXE2_SUCCESS) + PMD_LOG_ERR(INIT, "Failed to mp init (secondary), ret=%d", ret); goto l_end; + } ret = sxe2_hw_init(dev); if (ret) { diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..a7b94e8967 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,247 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > SXE2_FRAME_SIZE_MAX) 
{ + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", + offset, ret, 
rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; +} diff --git 
a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..e6f671e3dc --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..02533abfd5 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,945 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static __rte_always_inline s32 +sxe2_tx_bufs_free(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1]; + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) { + mbuf = buffer[0].mbuf; + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = buffer[i].mbuf; + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + rte_mempool_put(buffer->mbuf->pool, buffer->mbuf); + buffer->mbuf = NULL; + } + } + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + mbuf = rte_pktmbuf_prefree_seg(buffer->mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + buffer->mbuf = NULL; + } + } + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return 
ret; +} + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_DEBUG(TX, "desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + 
*desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer *next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 
desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; 
+ desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = 
m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_DEBUG(TX, "Tx pkts set RS bit. " + "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static __rte_always_inline void +sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 desc_offset; + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, (*tx_pkts)->data_len, 0); +} + +static __rte_always_inline void +sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 i; + u32 desc_offset; + for (i = 0; i < SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) { + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, + (*tx_pkts)->data_len, + 0); + } +} + +static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + 
struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use]; + volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use]; + u32 i, j; + u32 mainpart; + u32 leftover; + mainpart = nb_pkts & ((u32)~SXE2_TX_FILL_PER_LOOP_MASK); + leftover = nb_pkts & ((u32)SXE2_TX_FILL_PER_LOOP_MASK); + for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) { + for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j) + (buffer + i + j)->mbuf = *(tx_pkts + i + j); + sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i); + } + if (unlikely(leftover > 0)) { + for (i = 0; i < leftover; ++i) { + (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i); + sxe2_tx_data_desc_fill(desc + mainpart + i, + tx_pkts + mainpart + i); + } + } +} + +static inline u16 sxe2_tx_pkts_batch(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + u16 res_num = 0; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_DEBUG(TX, "Tx batch: not enough free descriptors, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + txq->desc_free_num -= nb_pkts; + if ((txq->next_use + nb_pkts) > txq->ring_depth) { + res_num = txq->ring_depth - txq->next_use; + sxe2_tx_ring_fill(txq, tx_pkts, res_num); + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs = txq->rs_thresh - 1; + txq->next_use = 0; + } + sxe2_tx_ring_fill(txq, tx_pkts + res_num, nb_pkts - res_num); + txq->next_use = txq->next_use + (nb_pkts - res_num); + if (txq->next_use > txq->next_rs) { + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + if (txq->next_rs >= txq->ring_depth) + txq->next_rs = txq->rs_thresh - 1; + } + if 
(txq->next_use >= txq->ring_depth) + txq->next_use = 0; + PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, txq->next_use, nb_pkts); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use); +l_end: + return nb_pkts; +} + +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 tx_done_num; + u16 tx_once_num; + u16 tx_need_num; + if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) { + tx_done_num = sxe2_tx_pkts_batch(tx_queue, + tx_pkts, nb_pkts); + goto l_end; + } + tx_done_num = 0; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM); + tx_once_num = sxe2_tx_pkts_batch(tx_queue, + &tx_pkts[tx_done_num], tx_need_num); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } +l_end: + return tx_done_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? 
(rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + else + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + else + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + else + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 
0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + 
first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay = NULL; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + 
rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + cur_mbuf->next = new_mbuf_pay; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + new_mbuf = NULL; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + 
first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v12 10/10] net/sxe2: add vectorized Rx and Tx 2026-05-12 8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (8 preceding siblings ...) 2026-05-12 8:06 ` [PATCH v12 09/10] drivers: add data path for Rx and Tx liujie5 @ 2026-05-12 8:06 ` liujie5 2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5 9 siblings, 1 reply; 143+ messages in thread From: liujie5 @ 2026-05-12 8:06 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> This patch implements the vectorized data path for the sxe2 PMD. It utilizes SIMD instructions (e.g., SSE) to process multiple packets simultaneously, significantly improving throughput for small packet processing. The implementation includes: * Vectorized Rx burst function for bulk descriptor processing. * Vectorized Tx burst function with optimized resource cleanup. * Capability flags update to reflect vectorized path support. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 7 + drivers/net/sxe2/sxe2_ethdev.c | 35 +- drivers/net/sxe2/sxe2_ethdev.h | 1 - drivers/net/sxe2/sxe2_queue.c | 28 ++ drivers/net/sxe2/sxe2_queue.h | 3 + drivers/net/sxe2/sxe2_txrx.c | 221 +++++++--- drivers/net/sxe2/sxe2_txrx.h | 11 +- drivers/net/sxe2/sxe2_txrx_poll.h | 3 +- drivers/net/sxe2/sxe2_txrx_vec.c | 197 +++++++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 545 ++++++++++++++++++++++++ 12 files changed, 1276 insertions(+), 82 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index b348dd71a1..3df57aee8c 100644 --- a/drivers/net/sxe2/meson.build +++ 
b/drivers/net/sxe2/meson.build @@ -11,6 +11,12 @@ cflags += ['-g'] deps += ['common_sxe2', 'hash','cryptodev','security'] +includes += include_directories('../../common/sxe2') + +if arch_subdir == 'x86' + sources += files('sxe2_txrx_vec_sse.c') +endif + sources += files( 'sxe2_ethdev.c', 'sxe2_cmd_chnl.c', @@ -20,6 +26,7 @@ sources += files( 'sxe2_rx.c', 'sxe2_txrx_poll.c', 'sxe2_txrx.c', + 'sxe2_txrx_vec.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 38e3967c56..dc4a33901d 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -59,17 +59,11 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { }; static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { - /* SXE2_PCI_MAP_RES_INVALID */ {0, 0, 0}, - /* SXE2_PCI_MAP_RES_DOORBELL_TX */ { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ { SXE2_RXQ_TAIL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_DYN */ { SXE2_VF_DYN_CTL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_ITR(默认使用ITR0) */ { SXE2_VF_INT_ITR(0, 0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_MSIX */ { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, }; @@ -102,25 +96,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 sxe2_queues_start(struct rte_eth_dev *dev) -{ - s32 ret = SXE2_SUCCESS; - ret = sxe2_txqs_all_start(dev); - if (ret) { - PMD_LOG_ERR(INIT, "Failed to start tx queue."); - goto l_end; - } - - ret = sxe2_rxqs_all_start(dev); - if (ret) { - PMD_LOG_ERR(INIT, "Failed to start rx queue."); - sxe2_txqs_all_stop(dev); - } - -l_end: - return ret; -} - static s32 sxe2_dev_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -153,7 +128,7 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) static s32 sxe2_dev_close(struct rte_eth_dev *dev) { (void)sxe2_dev_stop(dev); - + (void)sxe2_queues_release(dev); sxe2_vsi_uninit(dev); sxe2_dev_pci_map_uinit(dev); @@ -291,13 +266,19 @@ static const struct 
eth_dev_ops sxe2_eth_dev_ops = { .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + .rx_queue_start = sxe2_rx_queue_start, + .rx_queue_stop = sxe2_rx_queue_stop, + .tx_queue_start = sxe2_tx_queue_start, + .tx_queue_stop = sxe2_tx_queue_stop, .rx_queue_setup = sxe2_rx_queue_setup, - .tx_queue_setup = sxe2_tx_queue_setup, .rx_queue_release = sxe2_rx_queue_release, + .tx_queue_setup = sxe2_tx_queue_setup, .tx_queue_release = sxe2_tx_queue_release, .rxq_info_get = sxe2_rx_queue_info_get, .txq_info_get = sxe2_tx_queue_info_get, + .rx_burst_mode_get = sxe2_rx_burst_mode_get, + .tx_burst_mode_get = sxe2_tx_burst_mode_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index 4ef7854479..43148f9b03 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -11,7 +11,6 @@ #include <rte_tm_driver.h> #include <rte_io.h> -#include "sxe2_common.h" #include "sxe2_errno.h" #include "sxe2_type.h" #include "sxe2_vsi.h" diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c index 98343679f6..b1860490aa 100644 --- a/drivers/net/sxe2/sxe2_queue.c +++ b/drivers/net/sxe2/sxe2_queue.c @@ -6,6 +6,8 @@ #include "sxe2_queue.h" #include "sxe2_common_log.h" #include "sxe2_errno.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, struct sxe2_drv_queue_caps *q_caps) @@ -37,3 +39,29 @@ s32 sxe2_queues_init(struct rte_eth_dev *dev) return ret; } + +s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } +l_end: + return ret; +} + +void sxe2_queues_release(struct rte_eth_dev *dev) +{ + 
sxe2_all_rxqs_release(dev); + + sxe2_all_txqs_release(dev); +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h index 7fa22e2820..93402186c7 100644 --- a/drivers/net/sxe2/sxe2_queue.h +++ b/drivers/net/sxe2/sxe2_queue.h @@ -188,4 +188,7 @@ void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, s32 sxe2_queues_init(struct rte_eth_dev *dev); +s32 sxe2_queues_start(struct rte_eth_dev *dev); + +void sxe2_queues_release(struct rte_eth_dev *dev); #endif diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c index a7b94e8967..d81f0c8c98 100644 --- a/drivers/net/sxe2/sxe2_txrx.c +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -9,12 +9,11 @@ #include <rte_memzone.h> #include <ethdev_driver.h> #include <unistd.h> - #include "sxe2_txrx.h" #include "sxe2_txrx_common.h" +#include "sxe2_txrx_vec.h" #include "sxe2_txrx_poll.h" #include "sxe2_ethdev.h" - #include "sxe2_common_log.h" #include "sxe2_errno.h" #include "sxe2_osal.h" @@ -22,18 +21,38 @@ #if defined(RTE_ARCH_ARM64) #include <rte_cpuflags.h> #endif - +s32 __rte_cold +sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) || + txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + } + *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH; +l_end: + return ret; +} static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) { struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; s32 ret; u16 desc_idx; - if (unlikely(offset >= txq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - desc_idx = txq->next_use + offset; desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); if (desc_idx >= 
txq->ring_depth) { @@ -41,19 +60,16 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) if (desc_idx >= txq->ring_depth) desc_idx -= txq->ring_depth; } - if (desc_idx == 0) desc_idx = txq->rs_thresh - 1; else desc_idx -= 1; - if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == (txq->desc_ring[desc_idx].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) ret = RTE_ETH_TX_DESC_DONE; else ret = RTE_ETH_TX_DESC_FULL; - l_end: return ret; } @@ -61,13 +77,11 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) { struct rte_mbuf *m_seg = mbuf; - while (m_seg != NULL) { if (m_seg->data_len == 0) return SXE2_ERR_INVAL; m_seg = m_seg->next; } - return SXE2_SUCCESS; } @@ -79,7 +93,6 @@ u16 sxe2_tx_pkts_prepare(void *tx_queue, u64 ol_flags = 0; s32 ret = SXE2_SUCCESS; s32 i = 0; - for (i = 0; i < nb_pkts; i++) { mbuf = tx_pkts[i]; if (!mbuf) @@ -98,12 +111,10 @@ u16 sxe2_tx_pkts_prepare(void *tx_queue, rte_errno = -SXE2_ERR_INVAL; goto l_end; } - if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { rte_errno = -SXE2_ERR_INVAL; goto l_end; } - #ifdef RTE_ETHDEV_DEBUG_TX ret = rte_validate_tx_offload(mbuf); if (ret != SXE2_SUCCESS) { @@ -116,14 +127,12 @@ u16 sxe2_tx_pkts_prepare(void *tx_queue, rte_errno = -ret; goto l_end; } - ret = sxe2_tx_mbuf_empty_check(mbuf); if (ret != SXE2_SUCCESS) { rte_errno = -ret; goto l_end; } } - l_end: return i; } @@ -132,42 +141,117 @@ void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 tx_mode_flags = 0; - + s32 ret; + u32 vec_flags; + u32 batch_flags; + RTE_SET_USED(vec_flags); PMD_INIT_FUNC_TRACE(); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_tx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) { + tx_mode_flags = vec_flags; +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= 
RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) { + PMD_LOG_INFO(TX, "AVX512 is not supported in build env."); + } + if (((tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) == 0) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + PMD_LOG_INFO(TX, "AVX2 is not supported in build env."); + } - dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; - dev->tx_pkt_burst = sxe2_tx_pkts; + if (((tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) == 0)) + tx_mode_flags |= SXE2_TX_MODE_VEC_SSE; +#endif + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + ret = sxe2_tx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK); + } + } + ret = sxe2_tx_simple_batch_support_check(dev, &batch_flags); + if (ret == SXE2_SUCCESS && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH) + tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH; + } + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + dev->tx_pkt_prepare = NULL; +#ifdef RTE_ARCH_X86 + if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse; + } else { + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple; + } +#endif + } else { + if (tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) { + dev->tx_pkt_prepare = NULL; + dev->tx_pkt_burst = sxe2_tx_pkts_simple; + } else { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + } + } adapter->q_ctxt.tx_mode_flags = tx_mode_flags; PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", tx_mode_flags, dev->data->port_id); } +static const struct { + eth_tx_burst_t tx_burst; + const char *info; +} sxe2_tx_burst_infos[] = { + { sxe2_tx_pkts, "Scalar" }, +#ifdef RTE_ARCH_X86 + { sxe2_tx_pkts_vec_sse, "Vector SSE" }, + { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" }, +#endif 
+}; + +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode) +{ + eth_tx_burst_t pkt_burst = dev->tx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i; + u32 size; + size = RTE_DIM(sxe2_tx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_tx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) { struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; s32 ret; - if (unlikely(offset >= rxq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - if (offset >= rxq->ring_depth - rxq->hold_num) { ret = RTE_ETH_RX_DESC_UNAVAIL; goto l_end; } - if (rxq->processing_idx + offset >= rxq->ring_depth) desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; else desc = &rxq->desc_ring[rxq->processing_idx + offset]; - if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) ret = RTE_ETH_RX_DESC_DONE; else ret = RTE_ETH_RX_DESC_AVAIL; - l_end: PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", offset, ret, rxq->queue_id, rxq->port_id); @@ -179,7 +263,6 @@ static s32 sxe2_rx_queue_count(void *rx_queue) struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; u16 done_num = 0; - desc = &rxq->desc_ring[rxq->processing_idx]; while ((done_num < rxq->ring_depth) && (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & @@ -190,55 +273,97 @@ static s32 sxe2_rx_queue_count(void *rx_queue) else desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; } - PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", done_num, rxq->queue_id, rxq->port_id); - return done_num; } -static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) -{ - struct sxe2_rx_queue *rxq; - bool en = 
false; - u16 i; - - for (i = 0; i < dev->data->nb_rx_queues; ++i) { - rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; - if (rxq == NULL) - continue; - - if (0 != (rxq->offloads & offload)) { - en = true; - goto l_end; - } - } - -l_end: - return en; -} - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 rx_mode_flags = 0; + s32 ret; + u32 vec_flags; PMD_INIT_FUNC_TRACE(); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_rx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + rx_mode_flags = vec_flags; +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) + PMD_LOG_INFO(RX, "AVX512 is not supported in build env"); + + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) + PMD_LOG_INFO(RX, "AVX2 is not supported in build env"); + + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) + rx_mode_flags |= SXE2_RX_MODE_VEC_SSE; +#endif + if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) { + ret = sxe2_rx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK); + } + } + } +#ifdef RTE_ARCH_X86 + if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) { + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload; + goto l_end; + } +#endif if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; else dev->rx_pkt_burst = sxe2_rx_pkts_scattered; +l_end: PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", rx_mode_flags, 
dev->data->port_id); adapter->q_ctxt.rx_mode_flags = rx_mode_flags; } +static const struct { + eth_rx_burst_t rx_burst; + const char *info; +} sxe2_rx_burst_infos[] = { + { sxe2_rx_pkts_scattered, "Scalar Scattered" }, + { sxe2_rx_pkts_scattered_split, "Scalar Scattered split" }, +#ifdef RTE_ARCH_X86 + { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" }, +#endif +}; + +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode) +{ + eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i, size; + size = RTE_DIM(sxe2_rx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_rx_burst_infos[i].rx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_rx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + void sxe2_set_common_function(struct rte_eth_dev *dev) { PMD_INIT_FUNC_TRACE(); - dev->rx_queue_count = sxe2_rx_queue_count; dev->rx_descriptor_status = sxe2_rx_desciptor_status; diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h index e6f671e3dc..8f929c4f19 100644 --- a/drivers/net/sxe2/sxe2_txrx.h +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -6,16 +6,17 @@ #define SXE2_TXRX_H #include <ethdev_driver.h> #include "sxe2_queue.h" - void sxe2_set_common_function(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags); u16 sxe2_tx_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); - void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); - +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode); +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode); #endif diff --git 
a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h index 4924b0f41f..67da08e58e 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.h +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -8,7 +8,8 @@ #include "sxe2_queue.h" u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); diff --git a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c new file mode 100644 index 0000000000..30e1468020 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.c @@ -0,0 +1,197 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_rx_queue *rxq; + s32 ret = SXE2_SUCCESS; + u16 i; + *vec_flags = SXE2_RX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (!rte_is_power_of_2(rxq->ring_depth)) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC && + (rxq->ring_depth % rxq->rx_free_thresh) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct 
sxe2_rx_queue *rxq; + bool en = false; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + if ((rxq->offloads & offload) != 0) { + en = true; + goto l_end; + } + } +l_end: + return en; +} + +static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq) +{ + const u16 mask = rxq->ring_depth - 1; + u16 i; + if (unlikely(!rxq->buffer_ring)) { + PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring is NULL. " + "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id); + return; + } + if (rxq->realloc_num >= rxq->ring_depth) + return; + if (rxq->realloc_num == 0) { + for (i = 0; i < rxq->ring_depth; ++i) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } else { + for (i = rxq->processing_idx; + i != rxq->realloc_start; + i = (i + 1) & mask) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + rxq->realloc_num = rxq->ring_depth; + memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0])); +} + +static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq) +{ + uintptr_t data; + struct rte_mbuf mbuf_def; + + memset(&mbuf_def, 0, sizeof(mbuf_def)); + mbuf_def.buf_addr = 0; + mbuf_def.nb_segs = 1; + mbuf_def.data_off = RTE_PKTMBUF_HEADROOM; + mbuf_def.port = rxq->port_id; + rte_mbuf_refcnt_set(&mbuf_def, 1); + rte_compiler_barrier(); + data = (uintptr_t)&mbuf_def.rearm_data; + rxq->mbuf_init_value = *(u64 *)data; +} + +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_rx_queue *rxq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i); + continue; + } + rxq->ops.mbufs_release = 
sxe2_rx_queue_mbufs_release_vec; + sxe2_rx_queue_vec_init(rxq); + } + return ret; +} + +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u32 i; + *vec_flags = SXE2_TX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC || + txq->rs_thresh > SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + u16 i; + + if (unlikely(txq == NULL || txq->buffer_ring == NULL)) { + PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params."); + return; + } + i = txq->next_dd - (txq->rs_thresh - 1); + buffer = txq->buffer_ring; + if (txq->next_use < i) { + for ( ; i < txq->ring_depth; ++i) { + if (buffer[i].mbuf != NULL) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + } + i = 0; + } + for (; i < txq->next_use; ++i) { + if (buffer[i].mbuf != NULL) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + } +} + +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_tx_queue *txq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) { + PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i); + continue; + } + txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec; + } + return ret; +} diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h new file mode 100644 
index 0000000000..cb6a3dd3b8 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef _SXE2_TXRX_VEC_H_ +#define _SXE2_TXRX_VEC_H_ +#include <ethdev_driver.h> +#include "sxe2_queue.h" +#include "sxe2_type.h" +#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10) +#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \ + SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \ + SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \ + SXE2_RX_MODE_VEC_NEON) +#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10) +#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \ + SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \ + SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \ + SXE2_TX_MODE_VEC_NEON) +#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \ + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \ + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_TSO | \ + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_SECURITY | \ + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) +#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \ + 
RTE_ETH_TX_OFFLOAD_TCP_CKSUM) +#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP | \ + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \ + RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_SECURITY | \ + RTE_ETH_RX_OFFLOAD_QINQ_STRIP) +#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH) +#ifdef RTE_ARCH_X86 +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts); +#endif +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload); +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev); +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h new file mode 100644 index 0000000000..c0405c9a59 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h @@ -0,0 +1,235 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_TXRX_VEC_COMMON_H__ +#define __SXE2_TXRX_VEC_COMMON_H__ +#include <rte_atomic.h> +#ifdef PCLINT +#include "avx_stub.h" +#endif +#include "sxe2_rx.h" +#include "sxe2_queue.h" +#include "sxe2_tx.h" +#include "sxe2_vsi.h" +#include "sxe2_ethdev.h" +#define SXE2_RX_NUM_PER_LOOP_SSE 4 +#define SXE2_RX_NUM_PER_LOOP_AVX 8 +#define SXE2_RX_NUM_PER_LOOP_NEON 4 +#define SXE2_RX_REARM_THRESH_VEC 64 +#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32 +#define SXE2_TX_RS_THRESH_MIN_VEC 32 +#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64 + +static __rte_always_inline void +sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 i; + for (i = 0; i < nb_pkts; ++i) + buffer[i].mbuf = tx_pkts[i]; +} + +static __rte_always_inline s32 +sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)]; + mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf); + if (likely(mbuf)) { + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (likely(mbuf)) { + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + } + 
} + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + +static inline void +sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, u64 *desc_qw1) +{ + u64 offloads = mbuf->ol_flags; + u32 desc_cmd = 0; + u32 desc_offset = 0; + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + default: + break; + } + *desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + *desc_qw1 |= ((u64)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT; + } + *desc_qw1 |= ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT; +} +#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \ + (((_flags) & 0x30) >> 4) + +static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq, + struct rte_mbuf *mbuf, u8 umbcast_flag) +{ + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed); + switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } + } +} + +static inline u16 +sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq, + struct rte_mbuf **mbuf_bufs, u16 mbuf_num, + u8 *split_rxe_flags, u8 *umbcast_flags) +{ + struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *tmp_seg; + u16 done_num, buf_idx; + done_num = 0; + for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) { + if (last_seg) { + last_seg->next = mbuf_bufs[buf_idx]; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + first_seg->nb_segs++; + first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len; + last_seg = last_seg->next; + if (split_rxe_flags[buf_idx] == 0) { + first_seg->hash = last_seg->hash; + first_seg->vlan_tci = last_seg->vlan_tci; + first_seg->ol_flags = last_seg->ol_flags; + first_seg->pkt_len -= rxq->crc_len; + if (last_seg->data_len > rxq->crc_len) { + last_seg->data_len -= rxq->crc_len; + } else { + tmp_seg = first_seg; + first_seg->nb_segs--; + while (tmp_seg->next != last_seg) + tmp_seg = tmp_seg->next; + tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len); + tmp_seg->next = NULL; + rte_pktmbuf_free_seg(last_seg); + last_seg = NULL; + } + done_pkts[done_num++] = first_seg; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]); + first_seg = NULL; + last_seg = NULL; + } else if 
(split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + continue; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + last_seg = NULL; + continue; + } + } else { + if (split_rxe_flags[buf_idx] == 0) { + done_pkts[done_num++] = mbuf_bufs[buf_idx]; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx], + umbcast_flags[buf_idx]); + continue; + } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + first_seg = mbuf_bufs[buf_idx]; + last_seg = first_seg; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]); + continue; + } + } + } + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *))); + return done_num; +} +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c new file mode 100644 index 0000000000..8cf11849d6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c @@ -0,0 +1,545 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_bitops.h> +#include <rte_malloc.h> +#include <rte_mempool.h> +#include <rte_vect.h> +#include "rte_common.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_queue.h" +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_vsi.h" + +static __rte_always_inline void +sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf *pkt, + u64 desc_cmd, bool with_offloads) +{ + __m128i data_desc; + u64 desc_qw1; + u32 desc_offset; + desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA | + ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT | + ((u64)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len); + desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (with_offloads) + sxe2_tx_desc_fill_offloads(pkt, &desc_qw1); + data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc); +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + volatile union sxe2_tx_data_desc *desc; + struct sxe2_tx_buffer *buffer; + u16 next_use; + u16 res_num; + u16 tx_num; + u16 i; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free_vec(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_DEBUG(TX, "Tx pkts sse batch: may not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + tx_num = nb_pkts; + next_use = txq->next_use; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + txq->desc_free_num -= nb_pkts; + res_num = txq->ring_depth - txq->next_use; + if (tx_num >= res_num) { + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num); + for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, 
*tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++, + (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS), + with_offloads); + tx_num -= res_num; + next_use = 0; + txq->next_rs = txq->rs_thresh - 1; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + } + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num); + for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + next_use += tx_num; + if (next_use > txq->next_rs) { + txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + } + txq->next_use = next_use; + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, nb_pkts); +l_end: + return nb_pkts; +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + u16 tx_done_num = 0; + u16 tx_once_num; + u16 tx_need_num; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh); + tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq, + tx_pkts + tx_done_num, + tx_need_num, with_offloads); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } + return tx_done_num; +} + +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, false); +} +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, true); +} + +static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq) +{ + volatile union sxe2_rx_desc *desc; + struct 
rte_mbuf **buffer; + struct rte_mbuf *mbuf0, *mbuf1; + __m128i dma_addr0, dma_addr1; + __m128i virt_addr0, virt_addr1; + __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, + RTE_PKTMBUF_HEADROOM); + s32 ret; + u16 i; + u16 new_tail; + buffer = &rxq->buffer_ring[rxq->realloc_start]; + desc = &rxq->desc_ring[rxq->realloc_start]; + ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer, + SXE2_RX_REARM_THRESH_VEC); + if (ret != 0) { + PMD_LOG_INFO(RX, "Rx mbuf vec alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, rxq->queue_id); + if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) { + dma_addr0 = _mm_setzero_si128(); + for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) { + buffer[i] = &rxq->fake_mbuf; + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read), + dma_addr0); + } + } + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed += + SXE2_RX_REARM_THRESH_VEC; + goto l_end; + } + for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) { + mbuf0 = buffer[0]; + mbuf1 = buffer[1]; +#if RTE_IOVA_IN_MBUF + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) != + offsetof(struct rte_mbuf, buf_addr) + 8); +#endif + virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr); + virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr); +#if RTE_IOVA_IN_MBUF + dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1); +#else + dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1); +#endif + dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room); + dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr0); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr1); + } + rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC; + if (rxq->realloc_start >= rxq->ring_depth) + rxq->realloc_start = 0; + rxq->realloc_num -= SXE2_RX_REARM_THRESH_VEC; + new_tail = 
(rxq->realloc_start == 0) ? + (rxq->ring_depth - 1) : (rxq->realloc_start - 1); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail); +l_end: + return; +} + +static __rte_always_inline __m128i +sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4]) +{ + __m128i descs_tmp1, descs_tmp2; + __m128i descs_fnav_vld; + __m128i v_zeros, v_ffff, v_u32_one; + __m128i m_flags; + const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID); + descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]); + descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]); + descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2); + descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26); + descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31); + v_zeros = _mm_setzero_si128(); + v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros); + v_u32_one = _mm_srli_epi32(v_ffff, 31); + m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one); + m_flags = _mm_and_si128(m_flags, fdir_flags); + return m_flags; +} + +static __rte_always_inline void +sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq, + volatile union sxe2_rx_desc *desc __rte_unused, + __m128i descs_arr[4], + struct rte_mbuf **rx_pkts) +{ + const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value); + __m128i rearm_arr[4]; + __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags; + const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04, + 0x00001C04, 0x00001C04); + const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000, + 0x20000000, 0x20000000); + const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, + 0, 0, 0, RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + 0, 0, 0, 0); + const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH, + 0, 0, 0, 0); + const __m128i cksum_flags = + _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + 
((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1)); + const __m128i cksum_mask = + _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD); + const __m128i vlan_mask = + _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED); + flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]); + tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]); + tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags); + tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags); + tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask); + tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask); + tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo); + flags = _mm_and_si128(tmp_flags, vlan_mask); + tmp_desc_lo = _mm_srli_epi32(tmp_desc_lo, 10); + tmp_flags = 
_mm_shuffle_epi8(cksum_flags, tmp_desc_lo); + tmp_flags = _mm_slli_epi32(tmp_flags, 1); + tmp_flags = _mm_and_si128(tmp_flags, cksum_mask); + flags = _mm_or_si128(flags, tmp_flags); + tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27); + tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi); + flags = _mm_or_si128(flags, tmp_flags); +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + if (rxq->fnav_enable) { + __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr); + flags = _mm_or_si128(flags, tmp_fnav_flags); + rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id; + rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id; + rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id; + rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id; + } +#endif + rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30); + rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30); + rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30); + rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) != + offsetof(struct rte_mbuf, rearm_data) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) != + RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]); +} + +static inline u16 +sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts, u8 *split_rxe_flags, + u8 *umbcast_flags) +{ + volatile union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i staterr, sterr_tmp1, sterr_tmp2; + __m128i pmbuf0; + __m128i ptype_all; +#ifdef 
RTE_ARCH_X86_64 + __m128i pmbuf1; +#endif + u32 i; + u32 bit_num; + u16 done_num = 0; + const u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + const __m128i crc_adjust = + _mm_set_epi16(0, 0, 0, + -rxq->crc_len, + 0, -rxq->crc_len, + 0, 0); + const __m128i rvp_shuf_mask = + _mm_set_epi8(7, 6, 5, 4, + 3, 2, + 13, 12, + 0XFF, 0xFF, 13, 12, + 0xFF, 0xFF, 0xFF, 0xFF); + const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL, + 0x0000000100000001LL); + const __m128i eop_mask = _mm_slli_epi32(dd_mask, + SXE2_RX_DESC_STATUS_EOP_SHIFT); + const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL, + 0x0000208000002080LL); + const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x04, 0x0C, + 0x00, 0x08); + const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12); + desc = &rxq->desc_ring[rxq->processing_idx]; + rte_prefetch0(desc); + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE); + if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC) + sxe2_rx_queue_rearm_sse(rxq); + if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK) == 0) + goto l_end; + buffer = &rxq->buffer_ring[rxq->processing_idx]; + for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE, + desc += SXE2_RX_NUM_PER_LOOP_SSE) { + pmbuf0 = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i])); + descs_arr[3] = 
_mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 3)); + rte_compiler_barrier(); + _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0); +#ifdef RTE_ARCH_X86_64 + pmbuf1 = _mm_loadu_si128((__m128i *)&buffer[i + 2]); +#endif + descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 2)); + rte_compiler_barrier(); + descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 1)); + rte_compiler_barrier(); + descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc)); +#ifdef RTE_ARCH_X86_64 + _mm_storeu_si128((__m128i *)&rx_pkts[i + 2], pmbuf1); +#endif + if (split_rxe_flags) { + rte_mbuf_prefetch_part2(rx_pkts[i]); + rte_mbuf_prefetch_part2(rx_pkts[i + 1]); + rte_mbuf_prefetch_part2(rx_pkts[i + 2]); + rte_mbuf_prefetch_part2(rx_pkts[i + 3]); + } + rte_compiler_barrier(); + mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask); + mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask); + mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask); + mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask); + sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]); + sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]); + sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts); + mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust); + mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust); + mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust); + mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust); + staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2); + ptype_all = _mm_and_si128(staterr, ptype_mask); + _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1, + mbuf_arr[3]); + _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1, + mbuf_arr[2]); + if (umbcast_flags != NULL) { + const __m128i umbcast_mask = + _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK); + const __m128i umbcast_shuf_mask = + 
_mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x07, 0x0F, + 0x03, 0x0B); + __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask); + umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask); + *(s32 *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits); + umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + if (split_rxe_flags != NULL) { + __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask); + __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask); + rxe_bits = _mm_srli_epi32(rxe_bits, 7); + eop_bits = _mm_or_si128(eop_bits, rxe_bits); + eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask); + *(s32 *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits); + split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + staterr = _mm_and_si128(staterr, dd_mask); + staterr = _mm_packs_epi32(staterr, _mm_setzero_si128()); + _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1, + mbuf_arr[1]); + _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1, + mbuf_arr[0]); + rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)]; + rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)]; + rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)]; + rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)]; + bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr)); + done_num += bit_num; + if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE)) + break; + } + rxq->processing_idx += done_num; + rxq->processing_idx &= (rxq->ring_depth - 1); + rxq->realloc_num += done_num; + PMD_LOG_DEBUG(RX, "port_id=%u queue_id=%u last_id=%u recv_pkts=%d", + rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num); +l_end: + return done_num; +} +static __rte_always_inline u16 +sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + const u64 *split_rxe_flags64; + u8 split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u8 
umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u16 rx_done_num; + u16 rx_pkt_done_num; + rx_pkt_done_num = 0; + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, umbcast_flags); + } else { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, NULL); + } + if (rx_done_num == 0) + goto l_end; + if (!rxq->vsi->adapter->devargs.sw_stats_en) { + split_rxe_flags64 = (u64 *)split_rxe_flags; + if (rxq->pkt_first_seg == NULL && + split_rxe_flags64[0] == 0 && + split_rxe_flags64[1] == 0 && + split_rxe_flags64[2] == 0 && + split_rxe_flags64[3] == 0) { + rx_pkt_done_num = rx_done_num; + goto l_end; + } + if (rxq->pkt_first_seg == NULL) { + while (rx_pkt_done_num < rx_done_num && + split_rxe_flags[rx_pkt_done_num] == 0) + rx_pkt_done_num++; + if (rx_pkt_done_num == rx_done_num) + goto l_end; + rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num]; + } + } + rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num], + rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num], + &umbcast_flags[rx_pkt_done_num]); +l_end: + return rx_pkt_done_num; +} + +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + u16 done_num = 0; + u16 once_num; + while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) { + once_num = + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, + SXE2_RX_PKTS_BURST_BATCH_NUM_VEC); + done_num += once_num; + nb_pkts -= once_num; + if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) + goto l_end; + } + done_num += + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, nb_pkts); +l_end: + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback
2026-05-12 8:06 ` [PATCH v12 10/10] net/sxe2: add vectorized " liujie5
@ 2026-05-12 11:36 ` liujie5
2026-05-12 11:36 ` [PATCH v13 01/10] mailmap: add Jie Liu liujie5
` (10 more replies)
0 siblings, 11 replies; 143+ messages in thread
From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

This patch set addresses the feedback received on the v10 submission
for the sxe2 PMD. The primary focus is on fixing vector path selection,
ensuring memory safety during mbuf initialization, and cleaning up
redundant logic in the configuration functions.

v13 Changes:
- Fixed vector Rx burst function being overwritten by scalar selection.
- Refactored Rx/Tx mode set functions to seed flags from caps first,
  eliminating tautological checks.
- Added memset for mbuf_def in vector init to avoid uninitialized reads.
- Converted pci_map_addr_info to designated initializers.
- Removed dead Windows-only code in meson.build.
- Added NULL checks for mbuf free for driver-wide consistency.
- Updated burst_mode_get to accurately report AVX paths.
- Adjusted SXE2_ETH_OVERHEAD to match actual VLAN capabilities.
Jie Liu (10):
  mailmap: add Jie Liu
  doc: add sxe2 guide and release notes
  common/sxe2: add sxe2 basic structures
  drivers: add base driver skeleton
  drivers: add base driver probe skeleton
  drivers: support PCI BAR mapping
  common/sxe2: add ioctl interface for DMA map and unmap
  net/sxe2: support queue setup and control
  drivers: add data path for Rx and Tx
  net/sxe2: add vectorized Rx and Tx

 .mailmap | 1 +
 doc/guides/nics/features/sxe2.ini | 30 +
 doc/guides/nics/index.rst | 1 +
 doc/guides/nics/sxe2.rst | 34 +
 doc/guides/rel_notes/release_26_07.rst | 4 +
 drivers/common/sxe2/meson.build | 15 +
 drivers/common/sxe2/sxe2_common.c | 685 +++++++++++++++
 drivers/common/sxe2/sxe2_common.h | 86 ++
 drivers/common/sxe2/sxe2_common_log.h | 83 ++
 drivers/common/sxe2/sxe2_errno.h | 110 +++
 drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++
 drivers/common/sxe2/sxe2_internal_ver.h | 33 +
 drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++
 drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++
 drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++
 drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++
 drivers/common/sxe2/sxe2_type.h | 60 ++
 drivers/meson.build | 1 +
 drivers/net/meson.build | 1 +
 drivers/net/sxe2/meson.build | 32 +
 drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++
 drivers/net/sxe2/sxe2_cmd_chnl.h | 33 +
 drivers/net/sxe2/sxe2_drv_cmd.h | 389 +++++++++
 drivers/net/sxe2/sxe2_ethdev.c | 941 ++++++++++++++++++++
 drivers/net/sxe2/sxe2_ethdev.h | 315 +++++++
 drivers/net/sxe2/sxe2_irq.h | 49 ++
 drivers/net/sxe2/sxe2_queue.c | 67 ++
 drivers/net/sxe2/sxe2_queue.h | 194 +++++
 drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++
 drivers/net/sxe2/sxe2_rx.h | 34 +
 drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++
 drivers/net/sxe2/sxe2_tx.h | 32 +
 drivers/net/sxe2/sxe2_txrx.c | 372 ++++++++
 drivers/net/sxe2/sxe2_txrx.h | 22 +
 drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++
 drivers/net/sxe2/sxe2_txrx_poll.c | 945 +++++++++++++++++++++
 drivers/net/sxe2/sxe2_txrx_poll.h | 17 +
 drivers/net/sxe2/sxe2_txrx_vec.c | 197 +++++
 drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++
 drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++
 drivers/net/sxe2/sxe2_txrx_vec_sse.c | 545 ++++++++++++
 drivers/net/sxe2/sxe2_vsi.c | 212 +++++
 drivers/net/sxe2/sxe2_vsi.h | 205 +++++
 43 files changed, 9759 insertions(+)
 create mode 100644 doc/guides/nics/features/sxe2.ini
 create mode 100644 doc/guides/nics/sxe2.rst
 create mode 100644 drivers/common/sxe2/meson.build
 create mode 100644 drivers/common/sxe2/sxe2_common.c
 create mode 100644 drivers/common/sxe2/sxe2_common.h
 create mode 100644 drivers/common/sxe2/sxe2_common_log.h
 create mode 100644 drivers/common/sxe2/sxe2_errno.h
 create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
 create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
 create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h
 create mode 100644 drivers/common/sxe2/sxe2_osal.h
 create mode 100644 drivers/common/sxe2/sxe2_type.h
 create mode 100644 drivers/net/sxe2/meson.build
 create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
 create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
 create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
 create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
 create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
 create mode 100644 drivers/net/sxe2/sxe2_irq.h
 create mode 100644 drivers/net/sxe2/sxe2_queue.c
 create mode 100644 drivers/net/sxe2/sxe2_queue.h
 create mode 100644 drivers/net/sxe2/sxe2_rx.c
 create mode 100644 drivers/net/sxe2/sxe2_rx.h
 create mode 100644 drivers/net/sxe2/sxe2_tx.c
 create mode 100644 drivers/net/sxe2/sxe2_tx.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx.c
 create mode 100644 drivers/net/sxe2/sxe2_txrx.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
 create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h
 create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c
 create mode 100644 drivers/net/sxe2/sxe2_vsi.c
 create mode 100644 drivers/net/sxe2/sxe2_vsi.h
-- 
2.47.3

^ permalink raw reply	[flat|nested] 143+ messages in thread
* [PATCH v13 01/10] mailmap: add Jie Liu
2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5
@ 2026-05-12 11:36 ` liujie5
2026-05-12 11:36 ` [PATCH v13 02/10] doc: add sxe2 guide and release notes liujie5
` (9 subsequent siblings)
10 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 .mailmap | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.mailmap b/.mailmap
index 895412e568..d2c4485636 100644
--- a/.mailmap
+++ b/.mailmap
@@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com>
 Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com>
 Jie Hai <haijie1@huawei.com>
 Jie Liu <jie2.liu@hxt-semitech.com>
+Jie Liu <liujie5@linkdatatechnology.com>
 Jie Pan <panjie5@jd.com>
 Jie Wang <jie1x.wang@intel.com>
 Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com>
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 143+ messages in thread
* [PATCH v13 02/10] doc: add sxe2 guide and release notes
2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5
2026-05-12 11:36 ` [PATCH v13 01/10] mailmap: add Jie Liu liujie5
@ 2026-05-12 11:36 ` liujie5
2026-05-12 11:36 ` [PATCH v13 03/10] common/sxe2: add sxe2 basic structures liujie5
` (8 subsequent siblings)
10 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

Add a new guide for the SXE2 PMD in the nics directory. The guide
contains driver capabilities, prerequisites, and compilation/usage
instructions. Update the release notes to announce the addition of
the sxe2 network driver.

Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
 doc/guides/nics/features/sxe2.ini | 30 +++++++++++++++++++++++
 doc/guides/nics/index.rst | 1 +
 doc/guides/nics/sxe2.rst | 34 ++++++++++++++++++++++++++
 doc/guides/rel_notes/release_26_07.rst | 4 +++
 4 files changed, 69 insertions(+)
 create mode 100644 doc/guides/nics/features/sxe2.ini
 create mode 100644 doc/guides/nics/sxe2.rst

diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini
new file mode 100644
index 0000000000..2718a702d4
--- /dev/null
+++ b/doc/guides/nics/features/sxe2.ini
@@ -0,0 +1,30 @@
+;
+; Supported features of the 'sxe2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates it is only supported when the non-vector
+; path is selected.
+;
+[Features]
+Fast mbuf free = P
+Free Tx mbuf on demand = Y
+Burst mode info = Y
+Queue start/stop = Y
+MTU update = Y
+Buffer split on Rx = P
+Scattered Rx = Y
+CRC offload = Y
+VLAN offload = Y
+QinQ offload = P
+L3 checksum offload = Y
+L4 checksum offload = Y
+Timestamp offload = P
+Inner L3 checksum = P
+Inner L4 checksum = P
+Rx descriptor status = Y
+Tx descriptor status = Y
+FreeBSD = Y
+Linux = Y
+x86-32 = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index cb818284fe..e20be478f8 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -68,6 +68,7 @@ Network Interface Controller Drivers
    rnp
    sfc_efx
    softnic
+   sxe2
    tap
    thunderx
    txgbe
diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst
new file mode 100644
index 0000000000..7fcf9c085b
--- /dev/null
+++ b/doc/guides/nics/sxe2.rst
@@ -0,0 +1,34 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+SXE2 Poll Mode Driver
+======================
+
+The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for
+10/25/50/100 Gbps Network Adapters.
+The embedded switch, Physical Functions (PF),
+and SR-IOV Virtual Functions (VF) are supported.
+
+Implementation details
+----------------------
+
+The sxe2 PMD is designed to operate alongside the sxe2 kernel network driver.
+For management and control operations, the PMD communicates with the kernel
+driver via ioctl interfaces. These commands are processed by the kernel
+driver and subsequently dispatched to the hardware firmware for execution.
+
+For security and robustness, the driver's data path is optimized to operate
+using virtual addresses (IOVA as VA mode). However, to ensure full
+compatibility in system environments where an IOMMU is absent or disabled,
+the driver also provides an explicit path to support physical addressing
+(IOVA as PA mode).
+
+The hardware is capable of handling the corresponding IOVA addresses (either
+VA or PA) directly, as provided by the DPDK memory subsystem. This ensures
+that DPDK applications can only access memory segments explicitly allocated
+to the current process, preventing unauthorized access to random physical
+memory.
+
+This capability allows the PMD to coexist with kernel network interfaces,
+which remain functional, although they stop receiving unicast packets as
+long as they share the same MAC address.
diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index f012d47a4b..fa0f0f5cca 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -64,6 +64,10 @@ New Features
   * ``--auto-probing`` enables the initial bus probing, which is the current
     default behavior.
 
+* **Added Linkdata sxe2 ethernet driver.**
+
+  Added network driver for the Linkdata Network Adapters.
+
 Removed Items
 -------------
 
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 143+ messages in thread
* [PATCH v13 03/10] common/sxe2: add sxe2 basic structures
2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5
2026-05-12 11:36 ` [PATCH v13 01/10] mailmap: add Jie Liu liujie5
2026-05-12 11:36 ` [PATCH v13 02/10] doc: add sxe2 guide and release notes liujie5
@ 2026-05-12 11:36 ` liujie5
2026-05-12 11:36 ` [PATCH v13 04/10] drivers: add base driver skeleton liujie5
` (7 subsequent siblings)
10 siblings, 0 replies; 143+ messages in thread
From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu

From: Jie Liu <liujie5@linkdatatechnology.com>

This patch adds the base infrastructure for the sxe2 common library.
It includes the mandatory OS abstraction layer (OSAL), common structure
definitions, error codes, and the logging system implementation.

Specifically, this commit:
- Implements the logging stream management using RTE_LOG_LINE.
- Defines device-specific error codes and status registers.
- Adds the initial meson build configuration for the common library.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common_log.h | 84 +++ drivers/common/sxe2/sxe2_errno.h | 113 ++++ drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++ drivers/common/sxe2/sxe2_internal_ver.h | 33 ++ drivers/common/sxe2/sxe2_osal.h | 586 ++++++++++++++++++++ drivers/common/sxe2/sxe2_type.h | 60 ++ 6 files changed, 1583 insertions(+) create mode 100644 drivers/common/sxe2/sxe2_common_log.h create mode 100644 drivers/common/sxe2/sxe2_errno.h create mode 100644 drivers/common/sxe2/sxe2_host_regs.h create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h create mode 100644 drivers/common/sxe2/sxe2_osal.h create mode 100644 drivers/common/sxe2/sxe2_type.h diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h new file mode 100644 index 0000000000..a7d2157610 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_COMMON_LOG_H__ +#define __SXE2_COMMON_LOG_H__ + +#include "sxe2_type.h" + +extern s32 sxe2_common_log; +extern s32 sxe2_log_init; +extern s32 sxe2_log_driver; +extern s32 sxe2_log_rx; +extern s32 sxe2_log_tx; +extern s32 sxe2_log_hw; + +#define RTE_LOGTYPE_SXE2_COM sxe2_common_log +#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init +#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver +#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx +#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx +#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw + +#define SXE2_PMD_LOG(level, log_type, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \ + __func__, __VA_ARGS__) + +#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \ + RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \ + __func__ RTE_LOG_COMMA \ + adapter->dev_port_id, __VA_ARGS__) + +#define PMD_LOG_DEBUG(logtype, fmt, ...) 
\ + SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_INFO(logtype, fmt, ...) \ + SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_NOTICE(logtype, fmt, ...) \ + SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_WARN(logtype, fmt, ...) \ + SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ERR(logtype, fmt, ...) \ + SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_CRIT(logtype, fmt, ...) \ + SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_ALERT(logtype, fmt, ...) \ + SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_LOG_EMERG(logtype, fmt, ...) \ + SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \ + SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) 
\ + SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__) + +#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") + +#endif /* __SXE2_COMMON_LOG_H__ */ + diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h new file mode 100644 index 0000000000..89a715eaef --- /dev/null +++ b/drivers/common/sxe2/sxe2_errno.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_ERRNO_H__ +#define __SXE2_ERRNO_H__ +#include <errno.h> + +enum sxe2_status { + + SXE2_SUCCESS = 0, + + SXE2_ERR_PERM = -EPERM, + SXE2_ERR_NOFILE = -ENOENT, + SXE2_ERR_NOENT = -ENOENT, + SXE2_ERR_SRCH = -ESRCH, + SXE2_ERR_INTR = -EINTR, + SXE2_ERR_IO = -EIO, + SXE2_ERR_NXIO = -ENXIO, + SXE2_ERR_2BIG = -E2BIG, + SXE2_ERR_NOEXEC = -ENOEXEC, + SXE2_ERR_BADF = -EBADF, + SXE2_ERR_CHILD = -ECHILD, + SXE2_ERR_AGAIN = -EAGAIN, + SXE2_ERR_NOMEM = -ENOMEM, + SXE2_ERR_ACCES = -EACCES, + SXE2_ERR_FAULT = -EFAULT, + SXE2_ERR_BUSY = -EBUSY, + SXE2_ERR_EXIST = -EEXIST, + SXE2_ERR_XDEV = -EXDEV, + SXE2_ERR_NODEV = -ENODEV, + SXE2_ERR_NOTSUP = -ENOTSUP, + SXE2_ERR_NOTDIR = -ENOTDIR, + SXE2_ERR_ISDIR = -EISDIR, + SXE2_ERR_INVAL = -EINVAL, + SXE2_ERR_NFILE = -ENFILE, + SXE2_ERR_MFILE = -EMFILE, + SXE2_ERR_NOTTY = -ENOTTY, + SXE2_ERR_FBIG = -EFBIG, + SXE2_ERR_NOSPC = -ENOSPC, + SXE2_ERR_SPIPE = -ESPIPE, + SXE2_ERR_ROFS = -EROFS, + SXE2_ERR_MLINK = -EMLINK, + SXE2_ERR_PIPE = -EPIPE, + SXE2_ERR_DOM = -EDOM, + SXE2_ERR_RANGE = -ERANGE, + SXE2_ERR_DEADLOCK = -EDEADLK, + SXE2_ERR_DEADLK = -EDEADLK, + SXE2_ERR_NAMETOOLONG = -ENAMETOOLONG, + SXE2_ERR_NOLCK = -ENOLCK, + SXE2_ERR_NOSYS = -ENOSYS, + SXE2_ERR_NOTEMPTY = -ENOTEMPTY, + SXE2_ERR_ILSEQ = -EILSEQ, + SXE2_ERR_NODATA = -ENODATA, + SXE2_ERR_CANCELED = -ECANCELED, + SXE2_ERR_TIMEDOUT = -ETIMEDOUT, + + SXE2_ERROR = -150, + SXE2_ERR_NO_MEMORY = -151, + SXE2_ERR_HW_VERSION = -152, + SXE2_ERR_FW_VERSION = -153, + SXE2_ERR_FW_MODE = -154, + 
+ SXE2_ERR_CMD_ERROR = -156, + SXE2_ERR_CMD_NO_MEMORY = -157, + SXE2_ERR_CMD_NOT_READY = -158, + SXE2_ERR_CMD_TIMEOUT = -159, + SXE2_ERR_CMD_CANCELED = -160, + SXE2_ERR_CMD_RETRY = -161, + SXE2_ERR_CMD_HW_CRITICAL = -162, + SXE2_ERR_CMD_NO_DATA = -163, + SXE2_ERR_CMD_INVAL_SIZE = -164, + SXE2_ERR_CMD_INVAL_TYPE = -165, + SXE2_ERR_CMD_INVAL_LEN = -165, + SXE2_ERR_CMD_INVAL_MAGIC = -166, + SXE2_ERR_CMD_INVAL_HEAD = -167, + SXE2_ERR_CMD_INVAL_ID = -168, + + SXE2_ERR_DESC_NO_DONE = -171, + + SXE2_ERR_INIT_ARGS_NAME_INVAL = -181, + SXE2_ERR_INIT_ARGS_VAL_INVAL = -182, + SXE2_ERR_INIT_VSI_CRITICAL = -183, + + SXE2_ERR_CFG_FILE_PATH = -191, + SXE2_ERR_CFG_FILE = -192, + SXE2_ERR_CFG_INVALID_SIZE = -193, + SXE2_ERR_CFG_NO_PIPELINE_CFG = -194, + + SXE2_ERR_RESET_TIMEOUT = -200, + SXE2_ERR_VF_NOT_ACTIVE = -201, + SXE2_ERR_BUF_CSUM_ERR = -202, + SXE2_ERR_VF_DROP = -203, + + SXE2_ERR_FLOW_PARAM = -301, + SXE2_ERR_FLOW_CFG = -302, + SXE2_ERR_FLOW_CFG_NOT_SUPPORT = -303, + SXE2_ERR_FLOW_PROF_EXISTS = -304, + SXE2_ERR_FLOW_PROF_NOT_EXISTS = -305, + SXE2_ERR_FLOW_VSIG_FULL = -306, + SXE2_ERR_FLOW_VSIG_INFO = -307, + SXE2_ERR_FLOW_VSIG_NOT_FIND = -308, + SXE2_ERR_FLOW_VSIG_NOT_USED = -309, + SXE2_ERR_FLOW_VSI_NOT_IN_VSIG = -310, + SXE2_ERR_FLOW_MAX_LIMIT = -311, + + SXE2_ERR_SCHED_NEED_RECURSION = -400, + + SXE2_ERR_BFD_SESS_FLOW_HT_COLLISION = -500, + SXE2_ERR_BFD_SESS_FLOW_NOSPC = -501, +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h new file mode 100644 index 0000000000..984ea6214c --- /dev/null +++ b/drivers/common/sxe2/sxe2_host_regs.h @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */ + +#ifndef __SXE2_HOST_REGS_H__ +#define __SXE2_HOST_REGS_H__ + +#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s)) + +#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20)) +#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4)) +#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4)) +#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4)) +#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4)) + +#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004 +#define SXE2_RXQ_CTRL_ENABLED 0x00000001 +#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3) + +#define SXE2_PCIEPROC_BASE 0x002d6000 + +#define SXE2_PF_INT_BASE 0x00260000 +#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000) +#define SXE2_PF_INT_ALLOC_FIRST 0x7FF +#define SXE2_PF_INT_ALLOC_LAST_S 12 +#define SXE2_PF_INT_ALLOC_LAST \ + (0x7FF << SXE2_PF_INT_ALLOC_LAST_S) +#define SXE2_PF_INT_ALLOC_VALID BIT(31) + +#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040) +#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0) +#define SXE2_PF_INT_OICR_UR BIT(1) +#define SXE2_PF_INT_OICR_CA BIT(2) +#define SXE2_PF_INT_OICR_VFLR BIT(3) +#define SXE2_PF_INT_OICR_VFR_DONE BIT(4) +#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5) +#define SXE2_PF_INT_OICR_BFDE BIT(6) +#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7) +#define SXE2_PF_INT_OICR_ECC_ERR BIT(8) +#define SXE2_PF_INT_OICR_GPIO BIT(9) +#define SXE2_PF_INT_OICR_TSYN_TX BIT(11) +#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12) +#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13) +#define SXE2_PF_INT_OICR_EXHAUST BIT(14) +#define SXE2_PF_INT_OICR_FW BIT(15) +#define SXE2_PF_INT_OICR_SWINT BIT(16) +#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17) +#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18) +#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19) +#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20) +#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21) +#define SXE2_PF_INT_OICR_GRST BIT(22) +#define SXE2_PF_INT_OICR_FWQ_INT BIT(29) +#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30) +#define SXE2_PF_INT_OICR_MBXQ_INT 
BIT(31) + +#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020) + +#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_FW_ABNORMAL BIT(0) +#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1) +#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18) +#define SXE2_PF_INT_VFLR_DONE BIT(2) + +#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060) +#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_OICR_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S) +#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0) +#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S) +#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0) +#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11 +#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S) +#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100) +#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x) + +#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120) +#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7 +#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4) +#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8) + +#define SXE2_VFG_RAM_INIT_DONE \ + (SXE2_PF_INT_BASE + 0x0128) +#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0) +#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1) +#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2) + +#define SXE2_LINK_REG_GET_10G_VALUE 4 +#define SXE2_LINK_REG_GET_25G_VALUE 1 +#define SXE2_LINK_REG_GET_50G_VALUE 2 +#define SXE2_LINK_REG_GET_100G_VALUE 3 + +#define SXE2_PORT0_CNT 0 +#define SXE2_PORT1_CNT 1 +#define SXE2_PORT2_CNT 2 +#define SXE2_PORT3_CNT 3 + +#define SXE2_LINK_STATUS_BASE (0x002ac200) 
+#define SXE2_LINK_STATUS_PORT0_POS 3 +#define SXE2_LINK_STATUS_PORT1_POS 11 +#define SXE2_LINK_STATUS_PORT2_POS 19 +#define SXE2_LINK_STATUS_PORT3_POS 27 +#define SXE2_LINK_STATUS_MASK 1 + +#define SXE2_LINK_SPEED_BASE (0x002ac200) +#define SXE2_LINK_SPEED_PORT0_POS 0 +#define SXE2_LINK_SPEED_PORT1_POS 8 +#define SXE2_LINK_SPEED_PORT2_POS 16 +#define SXE2_LINK_SPEED_PORT3_POS 24 +#define SXE2_LINK_SPEED_MASK 7 + +#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4)) +#define SXE2_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PFVP_INT_ALLOC_LAST_S 12 +#define SXE2_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4)) +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0 + +#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12 + +#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \ + (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S) +#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31) + +#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4)) +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0 +#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12 +#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S) +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16 +#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16) + +#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4)) +#define SXE2_VSI_PF_ID_S 0 +#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S) +#define SXE2_VSI_PF_EN_M BIT(3) + +#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4)) +#define SXE2_MBX_CTL_MSIX_INDX_S 0 +#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S) +#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30) + 
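The link word at SXE2_LINK_STATUS_BASE packs one status bit and a 3-bit speed code per port; the PORT0-PORT3 positions (3/11/19/27 for status, 0/8/16/24 for speed) follow the pattern 8 * port + 3 and 8 * port. A minimal standalone decode sketch, assuming a raw 32-bit register value; the helper names are ours, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Masks and speed code copied from sxe2_host_regs.h. */
#define SXE2_LINK_STATUS_MASK 1
#define SXE2_LINK_SPEED_MASK 7
#define SXE2_LINK_REG_GET_25G_VALUE 1

/* Illustrative helpers: per port, the speed field sits at bit
 * 8 * port (3 bits) and the status bit at 8 * port + 3.
 */
static inline uint32_t sxe2_link_up(uint32_t reg, uint32_t port)
{
	return (reg >> (8 * port + 3)) & SXE2_LINK_STATUS_MASK;
}

static inline uint32_t sxe2_link_speed(uint32_t reg, uint32_t port)
{
	return (reg >> (8 * port)) & SXE2_LINK_SPEED_MASK;
}
```

For example, a register value of 0x900 reads back as port 1 up with the 25G speed code, while port 0 reads as down.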
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx)) +#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_TQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S) +#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx)) +#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF +#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11 +#define SXE2_PF_INT_RQCTL_ITR_IDX \ + (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S) +#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30) + +#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx)) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F) +#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \ + (0x3F) +#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6)) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7) +#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \ + (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT) + +#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \ + (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx)) +#define SXE2_VF_INT_ITR_INTERVAL 0xFFF + +#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx)) +#define SXE2_VF_DYN_CTL_INTENABLE BIT(0) +#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1) +#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2) +#define SXE2_VF_DYN_CTL_ITR_IDX_S \ + 3 +#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3 +#define SXE2_VF_DYN_CTL_INTERVAL_S 5 +#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24) +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25 +#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3 + +#define SXE2_VF_DYN_CTL_INTENABLE_MSK \ + BIT(31) + +#define SXE2_BAR4_MSIX_BASE 0 +#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10)) +#define SXE2_BAR4_MSIX_ENABLE 0 +#define SXE2_BAR4_MSIX_DISABLE 1 + +#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4)) + +#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100)) +#define 
SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CONTEXT7_HEAD_S 0 +#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S) +#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16 +#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S) + +#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100)) +#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100)) + +#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800 +#define SXE2_TXQ_CTRL_SW_EN_M BIT(0) +#define SXE2_TXQ_CTRL_HW_EN_M BIT(1) + +#define SXE2_TXQ_CTXT2_PROT_IDX_S 0 +#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0) +#define SXE2_TXQ_CTXT2_CGD_IDX_S 4 +#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4) +#define SXE2_TXQ_CTXT2_PF_IDX_S 9 +#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9) +#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12 +#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12) +#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23 +#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23) +#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25 +#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25) +#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26 +#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26) +#define SXE2_TXQ_CTXT2_WB_MODE_S 27 +#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27) +#define SXE2_TXQ_CTXT2_ITR_WB_S 28 +#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28) +#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29 +#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29) +#define SXE2_TXQ_CTXT2_SSO_EN_S 30 +#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30) + +#define SXE2_TXQ_CTXT3_SRC_VSI_S 0 +#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0) +#define SXE2_TXQ_CTXT3_CPU_ID_S 12 +#define 
SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12) +#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20 +#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20) +#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21 +#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21) +#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22 +#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22) + +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0 +#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13 +#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13) +#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14 +#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14) +#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15 +#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15) +#define SXE2_TXQ_CTXT3_QLEN_S 16 +#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16) + +#define SXE2_RX_BUF_CHAINED_MAX 10 +#define SXE2_RX_DESC_BASE_ADDR_UNIT 7 +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) +#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7) + +enum { + SXE2_RX_CTXT0 = 0, + SXE2_RX_CTXT1, + SXE2_RX_CTXT2, + SXE2_RX_CTXT3, + SXE2_RX_CTXT4, + SXE2_RX_CTXT_CNT, +}; + +#define SXE2_RX_CTXT_BASE_L_S 0 +#define SXE2_RX_CTXT_BASE_L_W 32 + +#define SXE2_RX_CTXT_BASE_H_S 0 +#define SXE2_RX_CTXT_BASE_H_W 25 +#define SXE2_RX_CTXT_DEPTH_L_S 25 +#define SXE2_RX_CTXT_DEPTH_L_W 7 + +#define SXE2_RX_CTXT_DEPTH_H_S 0 +#define SXE2_RX_CTXT_DEPTH_H_W 6 + +#define SXE2_RX_CTXT_DBUFF_S 6 +#define SXE2_RX_CTXT_DBUFF_W 7 + +#define SXE2_RX_CTXT_HBUFF_S 13 +#define SXE2_RX_CTXT_HBUFF_W 5 + +#define SXE2_RX_CTXT_HSPLT_TYPE_S 18 +#define SXE2_RX_CTXT_HSPLT_TYPE_W 2 + +#define SXE2_RX_CTXT_DESC_TYPE_S 20 +#define SXE2_RX_CTXT_DESC_TYPE_W 1 + +#define SXE2_RX_CTXT_CRC_S 21 +#define SXE2_RX_CTXT_CRC_W 1 + +#define SXE2_RX_CTXT_L2TAG_FLAG_S 23 +#define SXE2_RX_CTXT_L2TAG_FLAG_W 1 + +#define SXE2_RX_CTXT_HSPLT_0_S 24 +#define SXE2_RX_CTXT_HSPLT_0_W 4 + +#define SXE2_RX_CTXT_HSPLT_1_S 28 +#define SXE2_RX_CTXT_HSPLT_1_W 2 + +#define SXE2_RX_CTXT_INVALN_STP_S 31 +#define 
SXE2_RX_CTXT_INVALN_STP_W 1 + +#define SXE2_RX_CTXT_LRO_ENABLE_S 0 +#define SXE2_RX_CTXT_LRO_ENABLE_W 1 + +#define SXE2_RX_CTXT_CPUID_S 3 +#define SXE2_RX_CTXT_CPUID_W 8 + +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11 +#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14 + +#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25 +#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4 + +#define SXE2_RX_CTXT_RELAX_DATA_S 29 +#define SXE2_RX_CTXT_RELAX_DATA_W 1 + +#define SXE2_RX_CTXT_RELAX_WB_S 30 +#define SXE2_RX_CTXT_RELAX_WB_W 1 + +#define SXE2_RX_CTXT_RELAX_RD_S 31 +#define SXE2_RX_CTXT_RELAX_RD_W 1 + +#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1 +#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2 +#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3 +#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1 + +#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4 +#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1 + +#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6 +#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3 + +#define SXE2_RX_CTXT_VF_ID_S 9 +#define SXE2_RX_CTXT_VF_ID_W 8 + +#define SXE2_RX_CTXT_PF_ID_S 17 +#define SXE2_RX_CTXT_PF_ID_W 3 + +#define SXE2_RX_CTXT_VF_ENABLE_S 20 +#define SXE2_RX_CTXT_VF_ENABLE_W 1 + +#define SXE2_RX_CTXT_VSI_ID_S 21 +#define SXE2_RX_CTXT_VSI_ID_W 10 + +#define SXE2_PF_CTRLQ_FW_BASE 0x00312000 +#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000) +#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080) +#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100) +#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180) +#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200) +#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280) +#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300) +#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380) +#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400) +#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480) + +#define SXE2_PF_CTRLQ_MBX_BASE 
0x00316000 +#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100) +#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180) +#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200) +#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280) +#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300) +#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380) +#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400) +#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480) +#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500) +#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580) + +#define SXE2_CMD_REG_LEN_M 0x3FF +#define SXE2_CMD_REG_LEN_VFE_M BIT(28) +#define SXE2_CMD_REG_LEN_OVFL_M BIT(29) +#define SXE2_CMD_REG_LEN_CRIT_M BIT(30) +#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31) + +#define SXE2_CMD_REG_HEAD_M 0x3FF + +#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500) +#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0) +#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1) + +#define SXE2_TOP_CFG_BASE 0x00292000 +#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c) +#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0) + +#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214) +#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0) +#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8) +#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16) +#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24) +#define SXE2_FW_VER_FIX_SHIFT (8) +#define SXE2_FW_VER_SUB_SHIFT (16) +#define SXE2_FW_VER_MAIN_SHIFT (24) + +#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c) + +#define SXE2_STATUS SXE2_FW_VER + +#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210) + +#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218) + +#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c) +#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0) +#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0) + +#define SXE2_TX_OE_BASE 0x00030000 
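SXE2_FW_VER above packs the firmware version as four bytes, main.sub.fix.build from most to least significant. A standalone decode sketch using the masks and shifts from the header; the struct and function name are illustrative, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s))

/* SXE2_FW_VER field layout copied from sxe2_host_regs.h. */
#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0)
#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8)
#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16)
#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24)
#define SXE2_FW_VER_FIX_SHIFT (8)
#define SXE2_FW_VER_SUB_SHIFT (16)
#define SXE2_FW_VER_MAIN_SHIFT (24)

/* Illustrative holder for a decoded version. */
struct fw_ver { uint8_t main, sub, fix, build; };

static inline struct fw_ver sxe2_fw_ver_decode(uint32_t reg)
{
	struct fw_ver v = {
		.main = (reg & SXE2_FW_VER_MAIN_M) >> SXE2_FW_VER_MAIN_SHIFT,
		.sub = (reg & SXE2_FW_VER_SUB_M) >> SXE2_FW_VER_SUB_SHIFT,
		.fix = (reg & SXE2_FW_VER_FIX_M) >> SXE2_FW_VER_FIX_SHIFT,
		.build = reg & SXE2_FW_VER_BUILD_M,
	};
	return v;
}
```

A raw value of 0x01020304 decodes to version 1.2.3 build 4.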
+#define SXE2_RX_OE_BASE 0x00050000 + +#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4)) +#define SXE2_VSI_L2TAGSTXVALID(_i) \ + (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4)) +#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4)) +#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4)) +#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4)) +#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4)) + +#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4)) +#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4)) + +#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4)) +#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8)) +#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8)) +#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8)) +#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8)) + +#define SXE2_L2TAG_ID_STAG 0 +#define SXE2_L2TAG_ID_OUT_VLAN1 1 +#define SXE2_L2TAG_ID_OUT_VLAN2 2 +#define SXE2_L2TAG_ID_VLAN 3 + +#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF +#define SXE2_PFP_L2TAGSEN_DVM BIT(10) + +#define SXE2_VSI_TSR_STRIP_TAG_S 0 +#define SXE2_VSI_TSR_SHOW_TAG_S 4 + +#define SXE2_VSI_TSR_ID_STAG BIT(0) +#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1) +#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2) +#define SXE2_VSI_TSR_ID_VLAN BIT(3) + +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3) +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4 +#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7 +#define 
SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7) +#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16 +#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19) +#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20 +#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23) + +#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2 +#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3 +#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4 + +#define SXE2_SWITCH_OG_BASE 0x00140000 +#define SXE2_SWITCH_SWE_BASE 0x00150000 +#define SXE2_SWITCH_RG_BASE 0x00160000 + +#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4)) +#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4)) + +#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1) +#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2) +#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3) +#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9) + +#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16) + +#define SXE2_PCIE_SYS_READY 0x38c +#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0) +#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2) +#define SXE2_PCIE_SYS_READY_R5 BIT(3) +#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16) + +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78 +#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21) + +#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630) +#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586 + +#define SXE2_PFGEN_CTRL (0x00336000) +#define SXE2_PFGEN_CTRL_PFSWR BIT(0) + +#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4)) +#define SXE2_VFGEN_CTRL_VFSWR BIT(0) + +#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4) +#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2)) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0) 
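Reset triggers such as SXE2_PFGEN_CTRL_PFSWR are typically written by software and polled until hardware clears them. A minimal standalone sketch of that trigger-and-poll pattern, with rd() standing in for an MMIO read of SXE2_PFGEN_CTRL; the helper and the simulated register below are ours, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(nr) (1UL << (nr))
#define SXE2_PFGEN_CTRL_PFSWR BIT(0) /* from sxe2_host_regs.h */

/* Illustrative helper: poll rd() until all bits in mask read clear,
 * giving up after the retry budget is spent.
 */
static int sxe2_poll_bit_clear(uint32_t (*rd)(void), uint32_t mask,
			       int retries)
{
	while (retries-- > 0) {
		if ((rd() & mask) == 0)
			return 0;
	}
	return -1; /* timed out */
}

/* Simulated register: PFSWR reads as set for the first two polls,
 * as if hardware finished the reset on the third read.
 */
static int fake_reads;
static uint32_t fake_pfgen_ctrl_read(void)
{
	return fake_reads++ < 2 ? SXE2_PFGEN_CTRL_PFSWR : 0;
}
```

In a real driver the retry loop would also insert a delay (e.g. udelay()) between reads; that is omitted here to keep the sketch self-contained.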
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1) +#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10)) + +#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4)) + +#define SXE2_ACCEPT_RULE_TAGGED_S 0 +#define SXE2_ACCEPT_RULE_UNTAGGED_S 16 + +#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4)) +#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S) +#define SXE2_VF_RXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S) + +#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4)) +#define SXE2_VF_RXQ_MAPENA_M BIT(0) + +#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4)) +#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0 +#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S) +#define SXE2_VF_TXQ_BASE_Q_NUM_S 16 +#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S) + +#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4)) +#define SXE2_VF_TXQ_MAPENA_M BIT(0) + +#define PRI_PTP_BASEADDR 0x2a8000 + +#define GLTSYN (PRI_PTP_BASEADDR + 0x0) +#define GLTSYN_ENA_M BIT(0) + +#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4) +#define GLTSYN_CMD_INIT_TIME 0x01 +#define GLTSYN_CMD_INIT_INCVAL 0x02 +#define GLTSYN_CMD_ADJ_TIME 0x04 +#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C +#define GLTSYN_CMD_LATCHING_SHTIME 0x80 + +#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8) +#define GLTSYN_SYNC_PLUS_1NS 0x1 +#define GLTSYN_SYNC_MINUS_1NS 0x2 +#define GLTSYN_SYNC_EXEC 0x3 +#define GLTSYN_SYNC_GEN_PULSE 0x4 + +#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC) +#define GLTSYN_SEM_BUSY_M BIT(0) + +#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10) +#define GLTSYN_STAT_EVENT0_M BIT(0) +#define GLTSYN_STAT_EVENT1_M BIT(1) +#define GLTSYN_STAT_EVENT2_M BIT(2) + +#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20) +#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24) +#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28) +#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C) + +#define 
GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30) +#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34) +#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38) +#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C) + +#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40) +#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44) + +#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50) +#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54) + +#define GLTSYN_TGT_NS(_i) \ + (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16)) +#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16)) +#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16)) + +#define GLTSYN_EVENT_NS(_i) \ + (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16)) + +#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16)) +#define GLTSYN_EVENT_S_H_MASK (0xFFFF) + +#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16)) + +#define GLTSYN_AUXOUT(_i) \ + (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4)) +#define GLTSYN_AUXOUT_OUT_ENA BIT(0) +#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1) +#define GLTSYN_AUXOUT_OUTLVL BIT(3) +#define GLTSYN_AUXOUT_INT_ENA BIT(4) +#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3) + +#define GLTSYN_CLKO(_i) \ + (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4)) + +#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4)) +#define GLTSYN_AUXIN_RISING_EDGE BIT(0) +#define GLTSYN_AUXIN_FALLING_EDGE BIT(1) +#define GLTSYN_AUXIN_ENABLE BIT(4) + +#define CGMAC_CSR_BASE 0x2B4000 + +#define CGMAC_PORT_OFFSET 0x00004000 + +#define PFP_CGM_TX_TSMEM(_port, _i) \ + (CGMAC_CSR_BASE + 0x100 + \ + + CGMAC_PORT_OFFSET * _port + ((_i) * 4)) + +#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x108 + ((_i) * 8)) +#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * _port + 0x10C + ((_i) * 8)) + +#define CGMAC_CSR_MAC0_OFFSET 0x2B4000 +#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000)) + +#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \ + 
(CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \ + ((_i) * 4)) + +#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8)) +#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8)) + +#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11 +#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4)) + +#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0) +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11 +#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11) +#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30) +#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4)) + +#define SXE2_IPSEC_TX_BASE (0x2A0000) +#define SXE2_IPSEC_RX_BASE (0x2A2000) + +#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084) +#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18) +#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17) +#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4) +#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2) +#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c) + +#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff) + +#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0) +#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff) + +#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090) +#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff) + +#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + port * 0x4000) +#define SXE2_TXPAUSEXONFRAMES_LO(port) 
(SXE2_PAUSE_STATS_BASE(port) + 0x0894) +#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18) +#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a20 + 8 * (pri))) +#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0a60 + 8 * (pri))) +#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0aa0 + 8 * (pri))) +#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988) +#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28) +#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b30 + 8 * (pri))) +#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \ + (0x0b70 + 8 * (pri))) + +#endif diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h new file mode 100644 index 0000000000..92f49e7a20 --- /dev/null +++ b/drivers/common/sxe2/sxe2_internal_ver.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_INTERNAL_VER_H__ +#define __SXE2_INTERNAL_VER_H__ + +#define SXE2_VER_MAJOR_OFFSET (16) +#define SXE2_MK_VER(major, minor) \ + ((((major)) << SXE2_VER_MAJOR_OFFSET) | (minor)) +#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff) +#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff) + +#define SXE2_ITR_VER_MAJOR_V100 1 +#define SXE2_ITR_VER_MAJOR_V200 2 + +#define SXE2_ITR_VER_MAJOR 1 +#define SXE2_ITR_VER_MINOR 1 +#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR) + +#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100) +#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200) + +#define SXE2LIB_ITR_VER_MAJOR 1 +#define SXE2LIB_ITR_VER_MINOR 1 +#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR) + +#define SXE2_DRV_CLI_VER_MAJOR 1 +#define SXE2_DRV_CLI_VER_MINOR 1 +#define SXE2_DRV_CLI_VER \ + SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR) + +#endif /* __SXE2_INTERNAL_VER_H__ */ diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h new file mode 100644 index 0000000000..d77057e7ee --- /dev/null +++ b/drivers/common/sxe2/sxe2_osal.h @@ -0,0 +1,586 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_OSAL_H__ +#define __SXE2_OSAL_H__ +#include <string.h> +#include <stdint.h> +#include <stdarg.h> +#include <inttypes.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_ether.h> +#include <rte_version.h> + +#include "sxe2_type.h" + +#define BIT(nr) (1UL << (nr)) +#ifndef __BITS_PER_LONG +#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8) +#endif +#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG) +#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG)) + +#ifndef BIT_ULL +#define BIT_ULL(a) (1ULL << (a)) +#endif + +#define MIN(a, b) ((a) < (b) ?
(a) : (b)) + +#define BITS_PER_BYTE 8 + +#define IS_UNICAST_ETHER_ADDR(addr) \ + ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0)) + +#define STRUCT_SIZE(ptr, field, num) \ + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) + +#ifndef TAILQ_FOREACH_SAFE +#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \ + for ((var) = TAILQ_FIRST((head)); \ + (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \ + (var) = (tvar)) +#endif + +#define SXE2_QUEUE_WAIT_RETRY_CNT (50) + +#define __iomem + +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define dma_addr_t rte_iova_t + +#define resource_size_t u64 + +#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f) +#define ARRAY_SIZE(arr) RTE_DIM(arr) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +#define CPU_TO_BE16(o) rte_cpu_to_be_16(o) +#define CPU_TO_BE32(o) rte_cpu_to_be_32(o) +#define CPU_TO_BE64(o) rte_cpu_to_be_64(o) +#define BE16_TO_CPU(o) rte_be_to_cpu_16(o) + +#define NTOHS(a) rte_be_to_cpu_16(a) +#define NTOHL(a) rte_be_to_cpu_32(a) +#define HTONS(a) rte_cpu_to_be_16(a) +#define HTONL(a) rte_cpu_to_be_32(a) + +#define udelay(x) rte_delay_us(x) + +#define mdelay(x) rte_delay_us(1000 * (x)) + +#define msleep(x) rte_delay_us(1000 * (x)) + +#ifndef DIV_ROUND_UP +#define DIV_ROUND_UP(n, d) \ + (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d)) +#endif + +#define usleep_range(min) msleep(DIV_ROUND_UP(min, 1000)) + +#define __bf_shf(x) ((uint32_t)rte_bsf64(x)) + +#ifndef BITS_PER_LONG +#define BITS_PER_LONG 32 +#endif + +#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask)) +#define FIELD_GET(_mask, _reg) ((typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask))) + +#define SXE2_NUM_ROUND_UP(n, d) (DIV_ROUND_UP(n, d) * d) + 
+static inline void sxe2_swap_u16(u16 *a, u16 *b) +{ + u16 tmp; + + if (unlikely(*a == *b)) + return; + tmp = *a; + *a = *b; + *b = tmp; +} + +#define SXE2_SWAP_U16(a, b) sxe2_swap_u16(a, b) + +enum sxe2_itr_idx { + SXE2_ITR_IDX_0 = 0, + SXE2_ITR_IDX_1, + SXE2_ITR_IDX_2, + SXE2_ITR_IDX_NONE, +}; + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) unlikely((uintptr_t)(void *)(x) >= (uintptr_t)-MAX_ERRNO) +static inline bool IS_ERR(const void *ptr) +{ + return IS_ERR_VALUE((uintptr_t)ptr); +} + +#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1)) + +#define SXE2_CTXT_REG_VALUE(value, shift, width) ((value << shift) & \ + (((1ULL << width) - 1) << shift)) + +#define ETH_P_8021Q 0x8100 +#define ETH_P_8021AD 0x88a8 +#define ETH_P_QINQ1 0x9100 + +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) + +#define sxe2_init_lock(sp) rte_spinlock_init(&(sp)->spinlock) +#define sxe2_acquire_lock(sp) rte_spinlock_lock(&(sp)->spinlock) +#define sxe2_release_lock(sp) rte_spinlock_unlock(&(sp)->spinlock) +#define sxe2_destroy_lock(sp) RTE_SET_USED(sp) + +#define COMPILER_BARRIER() \ + { asm volatile("" ::: "memory"); } + +struct sxe2_list_head_type { + struct sxe2_list_head_type *next, *prev; +}; + +#define LIST_HEAD_TYPE sxe2_list_head_type + +#define SXE2_LIST_ENTRY(ptr, type, member) container_of(ptr, type, member) +#define LIST_FIRST_ENTRY(ptr, type, member) \ + SXE2_LIST_ENTRY((ptr)->next, type, member) +#define LIST_NEXT_ENTRY(pos, member) \ + SXE2_LIST_ENTRY((pos)->member.next, typeof(*(pos)), member) + +static inline void INIT_LIST_HEAD(struct LIST_HEAD_TYPE *list) +{ + list->next = list; + COMPILER_BARRIER(); + list->prev = list; + COMPILER_BARRIER(); +} + +static inline void sxe2_list_add(struct LIST_HEAD_TYPE *curr, + struct LIST_HEAD_TYPE *prev, + struct LIST_HEAD_TYPE *next) +{ + next->prev = curr; + curr->next = next; + curr->prev = prev; + COMPILER_BARRIER(); + prev->next = curr; + COMPILER_BARRIER(); +} + +#define LIST_ADD(entry, head) 
sxe2_list_add(entry, (head), (head)->next) +#define LIST_ADD_TAIL(entry, head) sxe2_list_add(entry, (head)->prev, head) + +static inline void __list_del(struct LIST_HEAD_TYPE *prev, struct LIST_HEAD_TYPE *next) +{ + next->prev = prev; + COMPILER_BARRIER(); + prev->next = next; + COMPILER_BARRIER(); +} + +static inline void __list_del_entry(struct LIST_HEAD_TYPE *entry) +{ + __list_del(entry->prev, entry->next); +} +#define LIST_DEL(entry) __list_del_entry(entry) + +static inline bool __list_is_empty(const struct LIST_HEAD_TYPE *head) +{ + COMPILER_BARRIER(); + return head->next == head; +} + +#define LIST_IS_EMPTY(head) __list_is_empty(head) + +#define LIST_FOR_EACH_ENTRY(pos, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member); \ + &pos->member != (head); \ + pos = LIST_NEXT_ENTRY(pos, member)) + +#define LIST_FOR_EACH_ENTRY_SAFE(pos, n, head, member) \ + for (pos = LIST_FIRST_ENTRY(head, typeof(*pos), member), \ + n = LIST_NEXT_ENTRY(pos, member); \ + &pos->member != (head); \ + pos = n, n = LIST_NEXT_ENTRY(n, member)) + +struct sxe2_blk_list_head_type { + struct sxe2_blk_list_head_type *next_blk; + struct sxe2_blk_list_head_type *next; + u16 blk_size; + u16 blk_id; +}; + +#define BLK_LIST_HEAD_TYPE sxe2_blk_list_head_type + +static inline void sxe2_blk_list_add(struct BLK_LIST_HEAD_TYPE *node, + struct BLK_LIST_HEAD_TYPE *head) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + + while (curr != NULL && curr->blk_id < node->blk_id) { + prev = curr; + curr = curr->next_blk; + } + + if (prev != head && prev->blk_id + prev->blk_size == node->blk_id) { + prev->blk_size += node->blk_size; + node->blk_size = 0; + } else { + node->next_blk = curr; + prev->next_blk = node; + } + + node = (node->blk_size == 0) ? 
prev : node; + + if (curr) { + + if (node->blk_id + node->blk_size == curr->blk_id) { + node->blk_size += curr->blk_size; + curr->blk_size = 0; + node->next_blk = curr->next_blk; + } else { + node->next_blk = curr; + } + } +} + +static inline struct BLK_LIST_HEAD_TYPE *sxe2_blk_list_get( + struct BLK_LIST_HEAD_TYPE *head, u16 blk_size) +{ + struct BLK_LIST_HEAD_TYPE *curr = head->next_blk; + struct BLK_LIST_HEAD_TYPE *prev = head; + struct BLK_LIST_HEAD_TYPE *blk_max_node = curr; + struct BLK_LIST_HEAD_TYPE *blk_max_node_pre = head; + struct BLK_LIST_HEAD_TYPE *ret = NULL; + s32 i = blk_size; + + while (curr && curr->blk_size != blk_size) { + if (curr->blk_size > blk_max_node->blk_size) { + blk_max_node = curr; + blk_max_node_pre = prev; + } + prev = curr; + curr = curr->next_blk; + } + + if (curr != NULL) { + prev->next_blk = curr->next_blk; + ret = curr; + goto l_end; + } + + if (blk_max_node->blk_size < blk_size) + goto l_end; + + ret = blk_max_node; + prev = blk_max_node_pre; + + curr = blk_max_node; + while (i != 0) { + curr = curr->next; + i--; + } + curr->blk_size = blk_max_node->blk_size - blk_size; + blk_max_node->blk_size = blk_size; + prev->next_blk = curr; + +l_end: + return ret; +} + +#define BLK_LIST_ADD(entry, head) sxe2_blk_list_add(entry, head) +#define BLK_LIST_GET(head, blk_size) sxe2_blk_list_get(head, blk_size) + +#ifndef BIT_ULL +#define BIT_ULL(nr) (ULL(1) << (nr)) +#endif + +static inline bool check_is_pow2(u64 val) +{ + return (val && !(val & (val - 1))); +} + +static inline u8 sxe2_setbit_cnt8(u8 num) +{ + u8 bits = 0; + u32 i; + + for (i = 0; i < 8; i++) { + bits += (num & 0x1); + num >>= 1; + } + + return bits; +} + +static inline bool max_set_bit_check(const u8 *mask, u16 size, u16 max) +{ + u16 count = 0; + u16 i; + bool ret = false; + + for (i = 0; i < size; i++) { + if (!mask[i]) + continue; + + if (count == max) + goto l_end; + + count += sxe2_setbit_cnt8(mask[i]); + if (count > max) + goto l_end; + } + + ret = true; +l_end: + 
return ret; +} + +#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long)) +#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32) + +#define GENMASK(h, l) \ + (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h)))) + +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1))) + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define BITMAP_MEM_ALIGNMENT 8 +#else +#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long)) +#endif +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#define DECLARE_BITMAP(name, bits) \ + unsigned long name[BITS_TO_LONGS(bits)] +#define BITMAP_TYPE unsigned long +#define small_const_nbits(nbits) \ + (__rte_constant(nbits) && (nbits) <= __BITS_PER_LONG && (nbits) > 0) + +static inline void set_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG); +} + +static inline void clear_bit(u32 nr, unsigned long *addr) +{ + addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG)); +} + +static inline u32 test_bit(u32 nr, const volatile unsigned long *addr) +{ + return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1))); +} + +static inline u32 bitmap_weight(const unsigned long *src, u32 nbits) +{ + u32 cnt = 0; + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + cnt++; + } + return cnt; +} + +static inline bool bitmap_empty(const unsigned long *src, u32 nbits) +{ + u16 i; + for (i = 0; i < nbits; i++) { + if (test_bit(i, src)) + return false; + } + return true; +} + +static inline void bitmap_zero(unsigned long *dst, u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memset(dst, 0, len); +} + +static bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & bitmap2[k]); + if 
(bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_and(dst, src1, src2, nbits); +} + +static void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, int bits) +{ + int k; + int nr = BITS_TO_LONGS(bits); + + for (k = 0; k < nr; k++) + dst[k] = bitmap1[k] | bitmap2[k]; +} + +static inline void bitmap_or(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + *dst = *src1 | *src2; + else + __bitmap_or(dst, src1, src2, nbits); +} + +static int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k; + u32 lim = bits/__BITS_PER_LONG; + unsigned long result = 0; + + for (k = 0; k < lim; k++) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k]); + if (bits % __BITS_PER_LONG) + result |= (dst[k] = bitmap1[k] & ~bitmap2[k] & + BITMAP_LAST_WORD_MASK(bits)); + return result != 0; +} + +static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1, + const unsigned long *src2, u32 nbits) +{ + if (small_const_nbits(nbits)) + return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; + return __bitmap_andnot(dst, src1, src2, nbits); +} + +static bool __bitmap_equal(const unsigned long *bitmap1, + const unsigned long *bitmap2, u32 bits) +{ + u32 k, lim = bits/__BITS_PER_LONG; + for (k = 0; k < lim; ++k) + if (bitmap1[k] != bitmap2[k]) + return false; + + if (bits % __BITS_PER_LONG) + if ((bitmap1[k] ^ bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits)) + return false; + + return true; +} + +static inline bool bitmap_equal(const unsigned long *src1, + const unsigned long *src2, u32 
nbits) +{ + if (small_const_nbits(nbits)) + return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)); + if (__rte_constant(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + return !memcmp(src1, src2, nbits / 8); + return __bitmap_equal(src1, src2, nbits); +} + +static inline unsigned long +find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + + for (i = offset; i < size; i++) { + if (test_bit(i, addr)) + break; + } + return i; +} + +static inline unsigned long +find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + u16 i; + for (i = offset; i < size; i++) { + if (!test_bit(i, addr)) + break; + } + return i; +} + +static inline void bitmap_copy(unsigned long *dst, const unsigned long *src, + u32 nbits) +{ + u32 len = BITS_TO_LONGS(nbits) * sizeof(unsigned long); + memcpy(dst, src, len); +} + +static inline unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_zero_bit(addr, size, 0); +} + +static inline unsigned long find_first_bit(const unsigned long *addr, unsigned long size) +{ + return find_next_bit(addr, size, 0); +} + +#define for_each_clear_bit(bit, addr, size) \ + for ((bit) = find_first_zero_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_zero_bit((addr), (size), (bit) + 1)) + +#define for_each_set_bit(bit, addr, size) \ + for ((bit) = find_first_bit((addr), (size)); \ + (bit) < (size); \ + (bit) = find_next_bit((addr), (size), (bit) + 1)) + +struct sxe2_adapter; + +static inline void *sxe2_malloc(__rte_unused struct sxe2_adapter *ad, size_t size) +{ + return rte_zmalloc(NULL, size, 0); +} + +static inline void *sxe2_calloc(__rte_unused struct sxe2_adapter *ad, size_t num, size_t size) +{ + return rte_calloc(NULL, num, size, 0); +} + +static inline void sxe2_free(__rte_unused struct sxe2_adapter *ad, void *ptr) +{ + rte_free(ptr); +} + +static inline void *sxe2_memdup(__rte_unused struct 
sxe2_adapter *ad, + const void *src, size_t size) +{ + void *p; + + p = sxe2_malloc(ad, size); + if (p) + rte_memcpy(p, src, size); + return p; +} + +#endif diff --git a/drivers/common/sxe2/sxe2_type.h b/drivers/common/sxe2/sxe2_type.h new file mode 100644 index 0000000000..e4ef6ed2ce --- /dev/null +++ b/drivers/common/sxe2/sxe2_type.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TYPES_H__ +#define __SXE2_TYPES_H__ + +#include <sys/time.h> + +#include <stdlib.h> +#include <errno.h> +#include <stdarg.h> +#include <unistd.h> +#include <string.h> +#include <stdint.h> + +#if defined __BYTE_ORDER__ +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BYTE_ORDER +#if __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#endif +#elif defined __BIG_ENDIAN__ +#define __BIG_ENDIAN_BITFIELD +#elif defined __LITTLE_ENDIAN__ +#define __LITTLE_ENDIAN_BITFIELD +#elif defined RTE_TOOLCHAIN_MSVC +#define __LITTLE_ENDIAN_BITFIELD +#else +#error "Unknown endianness." +#endif +typedef uint8_t u8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef uint64_t u64; + +typedef int8_t s8; +typedef int16_t s16; +typedef int32_t s32; +typedef int64_t s64; + +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 + +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#define STATIC static + +#define ETH_ALEN 6 + +#endif /* __SXE2_TYPES_H__ */ -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v13 04/10] drivers: add base driver skeleton 2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (2 preceding siblings ...) 2026-05-12 11:36 ` [PATCH v13 03/10] common/sxe2: add sxe2 basic structures liujie5 @ 2026-05-12 11:36 ` liujie5 2026-05-12 11:36 ` [PATCH v13 05/10] drivers: add base driver probe skeleton liujie5 ` (6 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the sxe2 PMD skeleton by implementing the PCI probe and remove functions. This includes the setup and cleanup of a character device used for control path communication between the user space and the hardware. The character device provides an interface for ioctl-based management operations, supporting device-specific configuration. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/meson.build | 15 + drivers/common/sxe2/sxe2_common.c | 636 +++++++++++++++++++++ drivers/common/sxe2/sxe2_common.h | 86 +++ drivers/common/sxe2/sxe2_ioctl_chnl.c | 161 ++++++ drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++++ drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 45 ++ drivers/meson.build | 1 + 7 files changed, 1085 insertions(+) create mode 100644 drivers/common/sxe2/meson.build create mode 100644 drivers/common/sxe2/sxe2_common.c create mode 100644 drivers/common/sxe2/sxe2_common.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build new file mode 100644 index 0000000000..f1cc1205a0 --- /dev/null +++ b/drivers/common/sxe2/meson.build @@ -0,0 +1,15 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2023 Corigine, Inc. 
+ +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +deps += ['bus_pci', 'net', 'eal', 'ethdev'] + +sources = files( + 'sxe2_common.c', + 'sxe2_ioctl_chnl.c', +) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c new file mode 100644 index 0000000000..62bdc93b5c --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.c @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_version.h> +#include <rte_pci.h> +#include <rte_dev.h> +#include <rte_devargs.h> +#include <rte_class.h> +#include <rte_malloc.h> +#include <rte_errno.h> +#include <rte_fbarray.h> +#include <rte_eal.h> +#include <eal_private.h> +#include <eal_memcfg.h> +#include <bus_driver.h> +#include <bus_pci_driver.h> +#include <eal_export.h> +#include <pthread.h> + +#include "sxe2_errno.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl_func.h" + +static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list = + TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list); + +static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list = + TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list); + +static pthread_mutex_t sxe2_common_devices_list_lock; + +static struct rte_pci_id *sxe2_common_pci_id_table; + +static const struct { + const char *name; + u32 class_type; +} sxe2_class_types[] = { + { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH }, + { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA }, +}; + +static u32 sxe2_class_name_to_value(const char *class_name) +{ + u32 class_type = SXE2_CLASS_TYPE_INVALID; + u32 i; + + for (i = 0; i < RTE_DIM(sxe2_class_types); i++) { + if (strcmp(class_name, sxe2_class_types[i].name) == 0) + class_type = sxe2_class_types[i].class_type; + } + + return class_type; +} + +static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device 
*rte_dev) +{ + struct sxe2_common_device *cdev = NULL; + + TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) { + if (rte_dev == cdev->dev) + goto l_end; + } + + cdev = NULL; +l_end: + return cdev; +} + +static struct sxe2_class_driver *sxe2_class_driver_get(u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + + TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) { + if (cdrv->drv_class == class_type) + goto l_end; + } + + cdrv = NULL; +l_end: + return cdrv; +} + +static s32 sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info, + const struct rte_devargs *devargs) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + s32 ret = SXE2_ERROR; + u32 i; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + ret = -EINVAL; + goto l_end; + } + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if (pair->value == NULL || *(pair->value) == '\0') { + PMD_LOG_ERR(COM, "Key %s has no value.", pair->key); + rte_kvargs_free(kvlist); + ret = -EINVAL; + goto l_end; + } + } + + kv_info->kvlist = kvlist; + ret = SXE2_SUCCESS; + PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.", + kv_info->kvlist->count); +l_end: + return ret; +} + +static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info) +{ + if ((kv_info != NULL) && (kv_info->kvlist != NULL)) { + rte_kvargs_free(kv_info->kvlist); + kv_info->kvlist = NULL; + } +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process) +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const char *const key_match, arg_handler_t handler, void *opaque_arg) +{ + const struct rte_kvargs_pair *pair; + struct rte_kvargs *kvlist; + u32 i; + s32 ret = SXE2_SUCCESS; + + if ((kv_info == NULL) || (kv_info->kvlist == NULL) || + (key_match == NULL)) { + PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter."); + ret = -EINVAL; + goto l_end; + } + kvlist = kv_info->kvlist; + + for (i = 0; i < kvlist->count; i++) { + pair = &kvlist->pairs[i]; + if 
(strcmp(pair->key, key_match) == 0) { + ret = (*handler)(pair->key, pair->value, opaque_arg); + if (ret) + goto l_end; + + kv_info->is_used[i] = true; + break; + } + } + +l_end: + return ret; +} + +static s32 sxe2_parse_class_type(const char *key, const char *value, void *args) +{ + u32 *class_type = (u32 *)args; + s32 ret = SXE2_SUCCESS; + + *class_type = sxe2_class_name_to_value(value); + if (*class_type == SXE2_CLASS_TYPE_INVALID) { + ret = -EINVAL; + PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value); + } + + return ret; +} + +static s32 sxe2_common_device_setup(struct sxe2_common_device *cdev) +{ + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_drv_dev_open(cdev, pci_dev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret); + goto l_end; + } + + ret = sxe2_drv_dev_handshark(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret); + goto l_close_dev; + } + + goto l_end; + +l_close_dev: + sxe2_drv_dev_close(cdev); +l_end: + return ret; +} + +static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (TAILQ_EMPTY(&sxe2_common_devices_list)) + (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL); + + sxe2_drv_dev_close(cdev); +} + +static struct sxe2_common_device *sxe2_common_device_alloc( + struct rte_device *rte_dev, u32 class_type) +{ + struct sxe2_common_device *cdev = NULL; + + cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0); + if (cdev == NULL) { + PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device."); + goto l_end; + } + cdev->dev = rte_dev; + cdev->class_type = class_type; + cdev->config.kernel_reset = false; + rte_ticketlock_init(&cdev->config.lock); + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_INSERT_TAIL(&sxe2_common_devices_list, 
cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + +l_end: + return cdev; +} + +static void sxe2_common_device_free(struct sxe2_common_device *cdev) +{ + + (void)pthread_mutex_lock(&sxe2_common_devices_list_lock); + TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next); + (void)pthread_mutex_unlock(&sxe2_common_devices_list_lock); + + rte_free(cdev); +} + +static bool sxe2_dev_is_pci(const struct rte_device *dev) +{ + return strcmp(dev->bus->name, "pci") == 0; +} + +static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv, + const struct rte_device *dev) +{ + const struct rte_pci_device *pci_dev; + const struct rte_pci_id *id_table; + bool ret = false; + + if (!sxe2_dev_is_pci(dev)) { + PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name); + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI_CONST(dev); + for (id_table = cdrv->id_table; id_table->vendor_id != 0; + id_table++) { + + if (id_table->vendor_id != pci_dev->id.vendor_id && + id_table->vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->device_id != pci_dev->id.device_id && + id_table->device_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_vendor_id != + pci_dev->id.subsystem_vendor_id && + id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) { + continue; + } + if (id_table->subsystem_device_id != + pci_dev->id.subsystem_device_id && + id_table->subsystem_device_id != RTE_PCI_ANY_ID) { + + continue; + } + if (id_table->class_id != pci_dev->id.class_id && + id_table->class_id != RTE_CLASS_ANY_ID) { + continue; + } + ret = true; + break; + } + +l_end: + return ret; +} + +static s32 sxe2_classes_driver_probe(struct sxe2_common_device *cdev, + struct sxe2_dev_kvargs_info *kv_info, u32 class_type) +{ + struct sxe2_class_driver *cdrv = NULL; + s32 ret = SXE2_ERROR; + + cdrv = sxe2_class_driver_get(class_type); + if (cdrv == NULL) { + PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type); + goto l_end; + } + + if 
(!sxe2_dev_pci_id_match(cdrv, cdev->dev)) { + PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name); + goto l_end; + } + + ret = cdrv->probe(cdev, kv_info); + if (ret) { + + PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name); + goto l_end; + } + + cdev->cdrv = cdrv; +l_end: + return ret; +} + +static s32 sxe2_classes_driver_remove(struct sxe2_common_device *cdev) +{ + struct sxe2_class_driver *cdrv = cdev->cdrv; + + return cdrv->remove(cdev); +} + +static s32 sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info) +{ + s32 ret = SXE2_SUCCESS; + u32 i; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + if (kv_info == NULL) + goto l_end; + + for (i = 0; i < kv_info->kvlist->count; i++) { + if (kv_info->is_used[i] == 0) { + PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.", + kv_info->kvlist->pairs[i].key); + ret = -EINVAL; + goto l_end; + } + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct rte_device *rte_dev = &pci_dev->device; + struct sxe2_common_device *cdev; + struct sxe2_dev_kvargs_info *kv_info_p = NULL; + + u32 class_type = SXE2_CLASS_TYPE_ETH; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name); + + cdev = sxe2_rtedev_to_cdev(rte_dev); + if (cdev != NULL) { + PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name); + ret = -EBUSY; + goto l_end; + } + + if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) { + kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info)); + if (!kv_info_p) { + PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info"); + goto l_end; + } + + ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported device args: %s", + rte_dev->devargs->args); + goto l_free_kvargs; + } + + ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS, + sxe2_parse_class_type, 
&class_type); + if (ret < 0) { + PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s", + rte_dev->devargs->args); + goto l_free_args; + } + + } + + cdev = sxe2_common_device_alloc(rte_dev, class_type); + if (cdev == NULL) { + ret = -ENOMEM; + goto l_free_args; + } + + ret = sxe2_common_device_setup(cdev); + if (ret != SXE2_SUCCESS) + goto l_err_setup; + + ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type); + if (ret != SXE2_SUCCESS) + goto l_err_probe; + + ret = sxe2_kvargs_validate(kv_info_p); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Device args validate failed: %s", + rte_dev->devargs->args); + goto l_err_valid; + } + cdev->kvargs = kv_info_p; + + goto l_end; +l_err_valid: + (void)sxe2_classes_driver_remove(cdev); +l_err_probe: + sxe2_common_device_cleanup(cdev); +l_err_setup: + sxe2_common_device_free(cdev); +l_free_args: + sxe2_kvargs_free(kv_info_p); +l_free_kvargs: + free(kv_info_p); +l_end: + return ret; +} + +static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name); + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = -ENODEV; + PMD_LOG_ERR(COM, "Fail to get remove device."); + goto l_end; + } + + ret = sxe2_classes_driver_remove(cdev); + if (ret != SXE2_SUCCESS) { + PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name); + goto l_end; + } + + sxe2_common_device_cleanup(cdev); + + if (cdev->kvargs != NULL) { + sxe2_kvargs_free(cdev->kvargs); + free(cdev->kvargs); + cdev->kvargs = NULL; + } + + sxe2_common_device_free(cdev); + +l_end: + return ret; +} + +static struct rte_pci_driver sxe2_common_pci_driver = { + .driver = { + .name = SXE2_COMMON_PCI_DRIVER_NAME, + }, + .probe = sxe2_common_pci_probe, + .remove = sxe2_common_pci_remove, +}; + +static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) +{ + u32 table_size = 0; + + while (id_table->vendor_id != 
0) { + table_size++; + id_table++; + } + + return table_size; +} + +static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id, + const struct rte_pci_id *id_table, u32 next_idx) +{ + s32 current_size = next_idx - 1; + s32 i; + bool exists = false; + + for (i = 0; i < current_size; i++) { + if ((id->device_id == id_table[i].device_id) && + (id->vendor_id == id_table[i].vendor_id) && + (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) && + (id->subsystem_device_id == id_table[i].subsystem_device_id)) { + exists = true; + break; + } + } + + return exists; +} + +static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table, + u32 *next_idx, const struct rte_pci_id *insert_table) +{ + for (; insert_table->vendor_id != 0; insert_table++) { + if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) { + + id_table[*next_idx] = *insert_table; + (*next_idx)++; + } + } +} + +static s32 sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table) +{ + const struct rte_pci_id *id_iter; + struct rte_pci_id *updated_table; + struct rte_pci_id *old_table; + u32 num_ids = 0; + u32 i = 0; + s32 ret = SXE2_SUCCESS; + + old_table = sxe2_common_pci_id_table; + if (old_table) + num_ids = sxe2_common_pci_id_table_size_get(old_table); + + num_ids += sxe2_common_pci_id_table_size_get(id_table); + + num_ids += 1; + + updated_table = calloc(num_ids, sizeof(*updated_table)); + if (!updated_table) { + PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table"); + goto l_end; + } + + if (old_table == NULL) { + + for (id_iter = id_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + } else { + + for (id_iter = old_table; id_iter->vendor_id != 0; + id_iter++, i++) + updated_table[i] = *id_iter; + + sxe2_common_pci_id_insert(updated_table, &i, id_table); + } + + updated_table[i].vendor_id = 0; + sxe2_common_pci_driver.id_table = updated_table; + sxe2_common_pci_id_table = updated_table; + free(old_table); + +l_end: + return 
ret; +} + +static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver) +{ + if (driver->id_table != NULL) { + if (sxe2_common_pci_id_table_update(driver->id_table) != 0) + return; + } + + if (driver->intr_lsc) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC; + if (driver->intr_rmv) + sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register) +void +sxe2_class_driver_register(struct sxe2_class_driver *driver) +{ + sxe2_common_driver_on_register_pci(driver); + TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next); +} + +static void sxe2_common_pci_init(void) +{ + const struct rte_pci_id empty_table[] = { + { + .vendor_id = 0 + }, + }; + s32 ret = SXE2_ERROR; + + if (sxe2_common_pci_id_table == NULL) { + ret = sxe2_common_pci_id_table_update(empty_table); + if (ret != SXE2_SUCCESS) + goto l_end; + } + rte_pci_register(&sxe2_common_pci_driver); + +l_end: + return; +} + +static bool sxe2_common_inited; + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init) +void +sxe2_common_init(void) +{ + if (sxe2_common_inited) + goto l_end; + + pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + sxe2_common_pci_init(); + sxe2_common_inited = true; + +l_end: + return; +} + +RTE_FINI(sxe2_common_pci_finish) +{ + if (sxe2_common_pci_id_table != NULL) { + rte_pci_unregister(&sxe2_common_pci_driver); + free(sxe2_common_pci_id_table); + } +} + +RTE_PMD_EXPORT_NAME(sxe2_common_pci); + +RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE); diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h new file mode 100644 index 0000000000..d02d281a70 --- /dev/null +++ b/drivers/common/sxe2/sxe2_common.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_COMMON_H__ +#define __SXE2_COMMON_H__ + +#include <rte_bitops.h> +#include <rte_kvargs.h> +#include <rte_compat.h> +#include <rte_memory.h> +#include <rte_ticketlock.h> + +#include "sxe2_type.h" + +#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci" + +#define SXE2_CDEV_TO_CMD_FD(cdev) \ + ((cdev)->config.cmd_fd) + +#define SXE2_DEVARGS_KEY_CLASS "class" + +struct sxe2_class_driver; + +enum sxe2_class_type { + SXE2_CLASS_TYPE_ETH = 0, + SXE2_CLASS_TYPE_VDPA, + SXE2_CLASS_TYPE_INVALID, +}; + +struct sxe2_common_dev_config { + s32 cmd_fd; + bool support_iommu; + bool kernel_reset; + rte_ticketlock_t lock; +}; + +struct sxe2_common_device { + struct rte_device *dev; + TAILQ_ENTRY(sxe2_common_device) next; + struct sxe2_class_driver *cdrv; + enum sxe2_class_type class_type; + struct sxe2_common_dev_config config; + struct sxe2_dev_kvargs_info *kvargs; +}; + +struct sxe2_dev_kvargs_info { + struct rte_kvargs *kvlist; + bool is_used[RTE_KVARGS_MAX]; +}; + +typedef s32 (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev, + struct sxe2_dev_kvargs_info *kvargs); + +typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); + +struct sxe2_class_driver { + TAILQ_ENTRY(sxe2_class_driver) next; + enum sxe2_class_type drv_class; + const s8 *name; + sxe2_class_driver_probe_t *probe; + sxe2_class_driver_remove_t *remove; + const struct rte_pci_id *id_table; + u32 intr_lsc; + u32 intr_rmv; +}; + +__rte_internal +void +sxe2_common_mem_event_cb(enum rte_mem_event type, + const void *addr, size_t size, void *arg __rte_unused); + +__rte_internal +void +sxe2_class_driver_register(struct sxe2_class_driver *driver); + +__rte_internal +void +sxe2_common_init(void); + +__rte_internal +s32 +sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info, + const char *const key_match, arg_handler_t handler, void *opaque_arg); + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c new file mode 100644 index 
0000000000..0d300e0f81 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <unistd.h> +#include <inttypes.h> +#include <rte_version.h> +#include <eal_export.h> + +#include "sxe2_osal.h" +#include "sxe2_errno.h" +#include "sxe2_common_log.h" +#include "sxe2_ioctl_chnl.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-" + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close) +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev) +{ + cdev->config.kernel_reset = true; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec) +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params) +{ + s32 cmd_fd; + s32 ret = -EIO; + + if (cdev->config.kernel_reset) { + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]" + "opcode[0x%x] req_len[%d] resp_len[%d]", + cmd_fd, cmd_params->trace_id, cmd_params->opcode, + cmd_params->req_len, cmd_params->resp_len); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s", + cmd_fd, cmd_params->opcode, ret, strerror(errno)); + ret = -errno; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open) +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct rte_pci_device *pci_dev) +{ + s32 
ret = SXE2_SUCCESS; + s32 fd = 0; + char drv_name[32] = {0}; + + snprintf(drv_name, sizeof(drv_name), + "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8, + SXE2_CHR_DEV_NAME, + pci_dev->addr.domain, + pci_dev->addr.bus, + pci_dev->addr.devid, + pci_dev->addr.function); + + fd = open(drv_name, O_RDWR); + if (fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Failed to open device:%s, ret=%d, err:%s", + drv_name, ret, strerror(errno)); + goto l_end; + } + + SXE2_CDEV_TO_CMD_FD(cdev) = fd; + + PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d", + drv_name, SXE2_CDEV_TO_CMD_FD(cdev)); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close) +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev) +{ + s32 fd = SXE2_CDEV_TO_CMD_FD(cdev); + + if (fd >= 0) + close(fd); + PMD_LOG_INFO(COM, "closed device fd=%d", fd); + SXE2_CDEV_TO_CMD_FD(cdev) = -1; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark) +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_cmd_common_hdr cmd_params; + + if (cdev->config.kernel_reset) { + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr)); + cmd_params.dpdk_ver = SXE2_COM_VER; + cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr); + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = -EIO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + + if (cmd_params.cap & 
BIT(SXE2_COM_CAP_IOMMU_MAP)) + cdev->config.support_iommu = true; + else + cdev->config.support_iommu = false; + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h new file mode 100644 index 0000000000..eedb3d6693 --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_IOCTL_CHNL_H__ +#define __SXE2_IOCTL_CHNL_H__ + +#ifdef SXE2_DPDK_DRIVER + +#include <rte_version.h> +#include <bus_pci_driver.h> +#include "sxe2_type.h" +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/ioctl.h> +#endif +#endif + +#include "sxe2_internal_ver.h" + +#define SXE2_COM_INVAL_U32 0xFFFFFFFF + +#define SXE2_COM_PCI_OFFSET_SHIFT 40 + +#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << SXE2_COM_PCI_OFFSET_SHIFT) +#define SXE2_COM_PCI_OFFSET_MASK (((u64)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1) +#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((u64)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \ + (((u64)(off)) & SXE2_COM_PCI_OFFSET_MASK)) + +#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU + +#define SXE2_DRV_CMD_DFLT_TIMEOUT (30) + +#define SXE2_COM_VER_MAJOR 1 +#define SXE2_COM_VER_MINOR 0 +#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR) + +enum SXE2_COM_CMD { + SXE2_DEVICE_HANDSHAKE = 1, + SXE2_DEVICE_IO_IRQS_REQ, + SXE2_DEVICE_EVT_IRQ_REQ, + SXE2_DEVICE_RST_IRQ_REQ, + SXE2_DEVICE_EVT_CAUSE_GET, + SXE2_DEVICE_DMA_MAP, + SXE2_DEVICE_DMA_UNMAP, + SXE2_DEVICE_PASSTHROUGH, + SXE2_DEVICE_MAX, +}; + +#define SXE2_CMD_TYPE 'S' + +#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE) +#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ) +#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_IRQ_REQ) +#define SXE2_COM_CMD_RST_IRQ_REQ 
_IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ) +#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET) +#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP) +#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP) +#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH) + +enum sxe2_com_cap { + SXE2_COM_CAP_IOMMU_MAP = 0, +}; + +struct sxe2_ioctl_cmd_common_hdr { + u32 dpdk_ver; + u32 drv_ver; + u32 msg_len; + u32 cap; + u8 reserved[32]; +}; + +struct sxe2_drv_cmd_params { + u64 trace_id; + u32 timeout; + u32 opcode; + u16 vsi_id; + u16 repr_id; + u32 req_len; + u32 resp_len; + void *req_data; + void *resp_data; + u8 resv[32]; +}; + +struct sxe2_ioctl_irq_set { + u32 cnt; + u8 resv[4]; + u32 base_irq_in_com; + s32 *event_fd; +}; + +enum sxe2_com_event_cause { + SXE2_COM_EC_LINK_CHG = 0, + SXE2_COM_SW_MODE_LEGACY, + SXE2_COM_SW_MODE_SWITCHDEV, + SXE2_COM_FC_ST_CHANGE, + + SXE2_COM_EC_RESET = 62, + SXE2_COM_EC_MAX = 63, +}; + +struct sxe2_ioctl_other_evt_set { + s32 eventfd; + u8 resv[4]; + u64 filter_table; +}; + +struct sxe2_ioctl_other_evt_get { + u64 evt_cause; + u8 resv[8]; +}; + +struct sxe2_ioctl_reset_sub_set { + s32 eventfd; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_map { + u64 vaddr; + u64 iova; + u64 size; + u8 resv[4]; +}; + +struct sxe2_ioctl_iommu_dma_unmap { + u64 iova; +}; + +union sxe2_drv_trace_info { + u64 id; + struct { + u64 count : 54; + u64 cpu_id : 10; + } sxe2_drv_trace_id_param; +}; + +#endif diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h new file mode 100644 index 0000000000..0c3cb9caea --- /dev/null +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IOCTL_CHNL_FUNC_H__ +#define __SXE2_IOCTL_CHNL_FUNC_H__ + +#include <rte_version.h> +#include <bus_pci_driver.h> + +#include "sxe2_type.h" +#include "sxe2_common.h" +#include "sxe2_ioctl_chnl.h" + +#ifdef __cplusplus +extern "C" { +#endif + +__rte_internal +void +sxe2_drv_cmd_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_cmd_exec(struct sxe2_common_device *cdev, + struct sxe2_drv_cmd_params *cmd_params); + +__rte_internal +s32 +sxe2_drv_dev_open(struct sxe2_common_device *cdev, + struct rte_pci_device *pci_dev); + +__rte_internal +void +sxe2_drv_dev_close(struct sxe2_common_device *cdev); + +__rte_internal +s32 +sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/drivers/meson.build b/drivers/meson.build index 6ae102e943..d4ae512bae 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -12,6 +12,7 @@ subdirs = [ 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 'common/zsda', # depends on bus. + 'common/sxe2', # depends on bus. 'mempool', # depends on common and bus. 'dma', # depends on common and bus. 'net', # depends on common, bus, mempool -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v13 05/10] drivers: add base driver probe skeleton 2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (3 preceding siblings ...) 2026-05-12 11:36 ` [PATCH v13 04/10] drivers: add base driver skeleton liujie5 @ 2026-05-12 11:36 ` liujie5 2026-05-12 11:36 ` [PATCH v13 06/10] drivers: support PCI BAR mapping liujie5 ` (5 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Initialize the eth_dev_ops for the sxe2 PMD. This includes the implementation of mandatory ethdev operations such as dev_configure, dev_start, dev_stop, and dev_infos_get. Set up the basic infrastructure for device initialization to allow the driver to be recognized as a valid ethernet device within the DPDK framework. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.h | 2 +- drivers/common/sxe2/sxe2_ioctl_chnl.c | 27 + drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 + drivers/net/meson.build | 1 + drivers/net/sxe2/meson.build | 21 + drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++++++ drivers/net/sxe2/sxe2_cmd_chnl.h | 33 ++ drivers/net/sxe2/sxe2_drv_cmd.h | 398 ++++++++++++++ drivers/net/sxe2/sxe2_ethdev.c | 611 +++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 295 ++++++++++ drivers/net/sxe2/sxe2_irq.h | 49 ++ drivers/net/sxe2/sxe2_queue.c | 39 ++ drivers/net/sxe2/sxe2_queue.h | 191 +++++++ drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++++++++ drivers/net/sxe2/sxe2_txrx_poll.h | 16 + drivers/net/sxe2/sxe2_vsi.c | 212 +++++++ drivers/net/sxe2/sxe2_vsi.h | 205 +++++++ 17 files changed, 2968 insertions(+), 1 deletion(-) create mode 100644 drivers/net/sxe2/meson.build create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h create mode 100644 
drivers/net/sxe2/sxe2_ethdev.c create mode 100644 drivers/net/sxe2/sxe2_ethdev.h create mode 100644 drivers/net/sxe2/sxe2_irq.h create mode 100644 drivers/net/sxe2/sxe2_queue.c create mode 100644 drivers/net/sxe2/sxe2_queue.h create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h create mode 100644 drivers/net/sxe2/sxe2_vsi.c create mode 100644 drivers/net/sxe2/sxe2_vsi.h diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h index d02d281a70..090b643548 100644 --- a/drivers/common/sxe2/sxe2_common.h +++ b/drivers/common/sxe2/sxe2_common.h @@ -57,7 +57,7 @@ typedef s32 (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev); struct sxe2_class_driver { TAILQ_ENTRY(sxe2_class_driver) next; enum sxe2_class_type drv_class; - const s8 *name; + const char *name; sxe2_class_driver_probe_t *probe; sxe2_class_driver_remove_t *remove; const struct rte_pci_id *id_table; diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 0d300e0f81..4b041765de 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -159,3 +159,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) +{ + s32 ret = SXE2_SUCCESS; + + if (cdev->config.kernel_reset) { + ret = SXE2_ERR_PERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64, + virt, len); + + ret = munmap(virt, len); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%"PRIx64", err:%s", + virt, len, strerror(errno)); + ret = SXE2_ERR_IO; + goto l_end; + } + +l_end: + return ret; +} diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 0c3cb9caea..376c5e3ac7 100644 --- 
a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -38,6 +38,15 @@ __rte_internal s32 sxe2_drv_dev_handshark(struct sxe2_common_device *cdev); +__rte_internal +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, + u64 len, u64 offset); + +__rte_internal +s32 +sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); + #ifdef __cplusplus } #endif diff --git a/drivers/net/meson.build b/drivers/net/meson.build index c7dae4ad27..4e8ccb945f 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -58,6 +58,7 @@ drivers = [ 'rnp', 'sfc', 'softnic', + 'sxe2', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build new file mode 100644 index 0000000000..6c9a86423a --- /dev/null +++ b/drivers/net/sxe2/meson.build @@ -0,0 +1,21 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + +if is_windows + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +cflags += ['-g'] + +deps += ['common_sxe2', 'hash','cryptodev','security'] + +sources += files( + 'sxe2_ethdev.c', + 'sxe2_cmd_chnl.c', + 'sxe2_vsi.c', + 'sxe2_queue.c', +) + +allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c new file mode 100644 index 0000000000..78e2a30614 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.c @@ -0,0 +1,319 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ioctl_chnl_func.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static union sxe2_drv_trace_info sxe2_drv_trace_id; + +static void sxe2_drv_trace_id_alloc(u64 *trace_id) +{ + union sxe2_drv_trace_info *trace = NULL; + u64 trace_id_count = 0; + + trace = &sxe2_drv_trace_id; + + trace_id_count = trace->sxe2_drv_trace_id_param.count; + ++trace_id_count; + trace->sxe2_drv_trace_id_param.count = + (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK); + + *trace_id = trace->id; +} + +static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter, + struct sxe2_drv_cmd_params *cmd, u32 opc, const char *opc_str, + void *in_data, u32 in_len, void *out_data, u32 out_len) +{ + PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str); + cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT; + cmd->opcode = opc; + cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + cmd->repr_id = (adapter->repr_priv_data != NULL) ? 
+ adapter->repr_priv_data->repr_id : 0xFFFF; + cmd->req_len = in_len; + cmd->req_data = in_data; + cmd->resp_len = out_len; + cmd->resp_data = out_data; + + sxe2_drv_trace_id_alloc(&cmd->trace_id); +} + +#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \ + __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len) + + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_CAPS, + NULL, 0, dev_caps, + sizeof(struct sxe2_drv_dev_caps_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_INFO, + NULL, 0, dev_info_resp, + sizeof(struct sxe2_drv_dev_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_DEV_GET_FW_INFO, + NULL, 0, dev_fw_info_resp, + sizeof(struct sxe2_drv_dev_fw_info_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret); + + return ret; +} + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = 
SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_req = {0}; + struct sxe2_drv_vsi_create_req_resp vsi_resp = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt); + vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func; + vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt; + vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_CREATE, + &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp), + &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp)); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret); + goto l_end; + } + + vsi->vsi_id = vsi_resp.vsi_id; + vsi->vsi_type = vsi_resp.vsi_type; + +l_end: + return ret; +} + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_vsi_free_req vsi_req = {0}; + + vsi_req.vsi_id = vsi->vsi_id; + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_VSI_FREE, + &vsi_req, sizeof(struct sxe2_drv_vsi_free_req), + NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret); + + return ret; +} + +#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7) +#define SXE2_RX_HDR_SIZE 256 + +static s32 sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq, + struct sxe2_drv_rxq_cfg_req *req, u16 rxq_cnt) +{ + struct sxe2_adapter *adapter = rxq->vsi->adapter; + struct sxe2_drv_rxq_ctxt *ctxt = req->cfg; + struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data; + s32 ret = SXE2_SUCCESS; + + req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + req->q_cnt = rxq_cnt; + req->max_frame_size = dev_data->mtu + 
SXE2_ETH_OVERHEAD; + + ctxt->queue_id = rxq->queue_id; + ctxt->depth = rxq->ring_depth; + ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); + ctxt->dma_addr = rxq->base_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) { + ctxt->lro_en = 1; + ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size; + } else { + ctxt->lro_en = 0; + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + ctxt->keep_crc_en = 1; + else + ctxt->keep_crc_en = 0; + + ctxt->desc_size = sizeof(union sxe2_rx_desc); + return ret; +} + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_rxq_cfg_req *req = NULL; + u16 len = 0; + + len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt); + req = rte_zmalloc("sxe2_rxq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(RX, "rxq cfg mem alloc failed"); + ret = -ENOMEM; + goto l_end; + } + + ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + ret = -EINVAL; + goto l_end; + } + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq, + struct sxe2_drv_txq_cfg_req *req, u16 txq_cnt) +{ + struct sxe2_drv_txq_ctxt *ctxt = req->cfg; + u16 q_idx = 0; + + req->vsi_id = txq->vsi->vsi_id; + req->q_cnt = txq_cnt; + + for (q_idx = 0; q_idx < txq_cnt; q_idx++) { + ctxt = &req->cfg[q_idx]; + ctxt->depth = txq[q_idx].ring_depth; + ctxt->dma_addr = txq[q_idx].base_addr; + ctxt->queue_id = txq[q_idx].queue_id; + } +} + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue 
*txq, u16 txq_cnt) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_txq_cfg_req *req; + u16 len = 0; + + len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt); + req = rte_zmalloc("sxe2_txq_cfg", len, 0); + if (req == NULL) { + PMD_LOG_ERR(TX, "txq cfg mem alloc failed"); + ret = -ENOMEM; + goto l_end; + } + + sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt); + + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_CFG_ENABLE, + req, len, NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret); + +l_end: + if (req) + rte_free(req); + return ret; +} + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req = {0}; + + req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id); + req.q_idx = rxq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_RXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) + PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d", + enable, ret); + + return ret; +} + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_common_device *cdev = adapter->cdev; + struct sxe2_drv_cmd_params param = {0}; + struct sxe2_drv_q_switch_req req = {0}; + + req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id); + req.q_idx = txq->queue_id; + + req.is_enable = (u8)enable; + sxe2_drv_cmd_params_fill(adapter, &param, SXE2_DRV_CMD_TXQ_DISABLE, + &req, sizeof(req), NULL, 0); + + ret = sxe2_drv_cmd_exec(cdev, &param); + if (ret) { + PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d", + enable, ret); + } + + return ret; +} diff --git 
a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h new file mode 100644 index 0000000000..200fe0be00 --- /dev/null +++ b/drivers/net/sxe2/sxe2_cmd_chnl.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_CMD_CHNL_H__ +#define __SXE2_CMD_CHNL_H__ + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_ioctl_chnl_func.h" + +s32 sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps); + +s32 sxe2_drv_dev_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_info_resp *dev_info_resp); + +s32 sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp); + +s32 sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi); + +s32 sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable); + +s32 sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable); + +s32 sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, u16 rxq_cnt); + +s32 sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, u16 txq_cnt); + +#endif diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h new file mode 100644 index 0000000000..4094442077 --- /dev/null +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -0,0 +1,398 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_DRV_CMD_H__ +#define __SXE2_DRV_CMD_H__ + +#ifdef SXE2_DPDK_DRIVER +#include "sxe2_type.h" +#define SXE2_DPDK_RESOURCE_INSUFFICIENT +#endif + +#ifdef SXE2_LINUX_DRIVER +#ifdef __KERNEL__ +#include <linux/types.h> +#include <linux/if_ether.h> +#endif +#endif + +#define SXE2_DRV_CMD_MODULE_S (16) +#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) + +#define SXE2_DEV_CAPS_OFFLOAD_L2 BIT(0) +#define SXE2_DEV_CAPS_OFFLOAD_VLAN BIT(1) +#define SXE2_DEV_CAPS_OFFLOAD_RSS BIT(2) +#define SXE2_DEV_CAPS_OFFLOAD_IPSEC BIT(3) +#define SXE2_DEV_CAPS_OFFLOAD_FNAV BIT(4) +#define SXE2_DEV_CAPS_OFFLOAD_TM BIT(5) +#define SXE2_DEV_CAPS_OFFLOAD_PTP BIT(6) +#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP BIT(7) +#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE BIT(8) + +#define SXE2_TXQ_STATS_MAP_MAX_NUM 16 +#define SXE2_RXQ_STATS_MAP_MAX_NUM 4 +#define SXE2_RXQ_MAP_Q_MAX_NUM 256 + +#define SXE2_STAT_MAP_INVALID_QID 0xFFFF + +#define SXE2_SCHED_MODE_DEFAULT 0 +#define SXE2_SCHED_MODE_TM 1 +#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2 +#define SXE2_SCHED_MODE_INVALID 3 + +#define SXE2_SRCVSI_PRUNE_MAX_NUM 2 + +#define SXE2_PTYPE_UNKNOWN BIT(0) +#define SXE2_PTYPE_L2_ETHER BIT(1) +#define SXE2_PTYPE_L3_IPV4 BIT(2) +#define SXE2_PTYPE_L3_IPV6 BIT(4) +#define SXE2_PTYPE_L4_TCP BIT(6) +#define SXE2_PTYPE_L4_UDP BIT(7) +#define SXE2_PTYPE_L4_SCTP BIT(8) +#define SXE2_PTYPE_INNER_L2_ETHER BIT(9) +#define SXE2_PTYPE_INNER_L3_IPV4 BIT(10) +#define SXE2_PTYPE_INNER_L3_IPV6 BIT(12) +#define SXE2_PTYPE_INNER_L4_TCP BIT(14) +#define SXE2_PTYPE_INNER_L4_UDP BIT(15) +#define SXE2_PTYPE_INNER_L4_SCTP BIT(16) +#define SXE2_PTYPE_TUNNEL_GRENAT BIT(17) + +#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER) +#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6) +#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \ + SXE2_PTYPE_L4_SCTP) +#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER) +#define SXE2_PTYPE_INNER_L3_MASK 
(SXE2_PTYPE_INNER_L3_IPV4 | \ + SXE2_PTYPE_INNER_L3_IPV6) +#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \ + SXE2_PTYPE_INNER_L4_UDP | \ + SXE2_PTYPE_INNER_L4_SCTP) +#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT) + +enum sxe2_dev_type { + SXE2_DEV_T_PF = 0, + SXE2_DEV_T_VF, + SXE2_DEV_T_PF_BOND, + SXE2_DEV_T_MAX, +}; + +struct sxe2_drv_queue_caps { + __le16 queues_cnt; + __le16 base_idx_in_pf; +}; + +struct sxe2_drv_msix_caps { + __le16 msix_vectors_cnt; + __le16 base_idx_in_func; +}; + +struct sxe2_drv_rss_hash_caps { + __le16 hash_key_size; + __le16 lut_key_size; +}; + +enum sxe2_vf_vsi_valid { + SXE2_VF_VSI_BOTH = 0, + SXE2_VF_VSI_ONLY_DPDK, + SXE2_VF_VSI_ONLY_KERNEL, + SXE2_VF_VSI_MAX, +}; + +struct sxe2_drv_vsi_caps { + __le16 func_id; + __le16 dpdk_vsi_id; + __le16 kernel_vsi_id; + __le16 vsi_type; +}; + +struct sxe2_drv_representor_caps { + __le16 cnt_repr_vf; + u8 rsv[2]; + struct sxe2_drv_vsi_caps repr_vf_id[256]; +}; + +enum sxe2_phys_port_name_type { + SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0, + SXE2_PHYS_PORT_NAME_TYPE_LEGACY, + SXE2_PHYS_PORT_NAME_TYPE_UPLINK, + SXE2_PHYS_PORT_NAME_TYPE_PFVF, + + SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN, +}; + +struct sxe2_switchdev_mode_info { + u8 pf_id; + u8 is_switchdev; + u8 rsv[2]; +}; + +struct sxe2_switchdev_cpvsi_info { + __le16 cp_vsi_id; + u8 rsv[2]; +}; + +struct sxe2_txsch_caps { + u8 layer_cap; + u8 tm_mid_node_num; + u8 prio_num; + u8 rev; +}; + +struct sxe2_drv_dev_caps_resp { + struct sxe2_drv_queue_caps queue_caps; + struct sxe2_drv_msix_caps msix_caps; + struct sxe2_drv_rss_hash_caps rss_hash_caps; + struct sxe2_drv_vsi_caps vsi_caps; + struct sxe2_txsch_caps txsch_caps; + struct sxe2_drv_representor_caps repr_caps; + u8 port_idx; + u8 pf_idx; + u8 dev_type; + u8 rev; + __le32 cap_flags; +}; + +struct sxe2_drv_dev_info_resp { + __le64 dsn; + __le16 vsi_id; + u8 rsv[2]; + u8 mac_addr[ETH_ALEN]; + u8 rsv2[2]; +}; + +struct sxe2_drv_dev_fw_info_resp { + u8 main_version_id; + u8 
sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_drv_rxq_ctxt { + __le64 dma_addr; + __le32 max_lro_size; + __le32 split_type_mask; + __le16 hdr_len; + __le16 buf_len; + __le16 depth; + __le16 queue_id; + u8 lro_en; + u8 keep_crc_en; + u8 split_en; + u8 desc_size; +}; + +struct sxe2_drv_rxq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + __le16 max_frame_size; + u8 rsv[2]; + struct sxe2_drv_rxq_ctxt cfg[]; +}; + +struct sxe2_drv_txq_ctxt { + __le64 dma_addr; + __le32 sched_mode; + __le16 queue_id; + __le16 depth; + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_txq_cfg_req { + __le16 q_cnt; + __le16 vsi_id; + struct sxe2_drv_txq_ctxt cfg[]; +}; + +struct sxe2_drv_q_switch_req { + __le16 q_idx; + __le16 vsi_id; + u8 is_enable; + u8 sched_mode; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_create_req_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +struct sxe2_drv_vsi_free_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_req { + __le16 vsi_id; + u8 rsv[2]; +}; + +struct sxe2_drv_vsi_info_get_resp { + __le16 vsi_id; + __le16 vsi_type; + struct sxe2_drv_queue_caps used_queues; + struct sxe2_drv_msix_caps used_msix; +}; + +enum sxe2_drv_cmd_module { + SXE2_DRV_CMD_MODULE_HANDSHAKE = 0, + SXE2_DRV_CMD_MODULE_DEV = 1, + SXE2_DRV_CMD_MODULE_VSI = 2, + SXE2_DRV_CMD_MODULE_QUEUE = 3, + SXE2_DRV_CMD_MODULE_STATS = 4, + SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5, + SXE2_DRV_CMD_MODULE_RSS = 6, + SXE2_DRV_CMD_MODULE_FLOW = 7, + SXE2_DRV_CMD_MODULE_TM = 8, + SXE2_DRV_CMD_MODULE_IPSEC = 9, + SXE2_DRV_CMD_MODULE_PTP = 10, + + SXE2_DRV_CMD_MODULE_VLAN = 11, + SXE2_DRV_CMD_MODULE_RDMA = 12, + SXE2_DRV_CMD_MODULE_LINK = 13, + SXE2_DRV_CMD_MODULE_MACADDR = 14, + SXE2_DRV_CMD_MODULE_PROMISC = 15, + + SXE2_DRV_CMD_MODULE_LED = 16, + SXE2_DEV_CMD_MODULE_OPT = 17, + SXE2_DEV_CMD_MODULE_SWITCH = 18, + SXE2_DRV_CMD_MODULE_ACL = 19, + SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20, + 
SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21, + + SXE2_DRV_CMD_MODULE_SCHED = 22, + + SXE2_DRV_CMD_MODULE_IRQ = 23, + + SXE2_DRV_CMD_MODULE_OPT = 24, +}; + +enum sxe2_drv_cmd_code { + SXE2_DRV_CMD_HANDSHAKE_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1), + SXE2_DRV_CMD_HANDSHAKE_DISABLE, + + SXE2_DRV_CMD_DEV_GET_CAPS = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1), + SXE2_DRV_CMD_DEV_GET_INFO, + SXE2_DRV_CMD_DEV_GET_FW_INFO, + SXE2_DRV_CMD_DEV_RESET, + SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO, + + SXE2_DRV_CMD_VSI_CREATE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1), + SXE2_DRV_CMD_VSI_FREE, + SXE2_DRV_CMD_VSI_INFO_GET, + SXE2_DRV_CMD_VSI_SRCVSI_PRUNE, + SXE2_DRV_CMD_VSI_FC_GET, + + SXE2_DRV_CMD_RX_MAP_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1), + SXE2_DRV_CMD_TX_MAP_SET, + SXE2_DRV_CMD_TX_RX_MAP_GET, + SXE2_DRV_CMD_TX_RX_MAP_RESET, + SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR, + + SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1), + SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE, + SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE, + SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE, + SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE, + + SXE2_DRV_CMD_RXQ_CFG_ENABLE = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1), + SXE2_DRV_CMD_TXQ_CFG_ENABLE, + SXE2_DRV_CMD_RXQ_DISABLE, + SXE2_DRV_CMD_TXQ_DISABLE, + + SXE2_DRV_CMD_VSI_STATS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1), + SXE2_DRV_CMD_VSI_STATS_CLEAR, + SXE2_DRV_CMD_MAC_STATS_GET, + SXE2_DRV_CMD_MAC_STATS_CLEAR, + + SXE2_DRV_CMD_RSS_KEY_SET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1), + SXE2_DRV_CMD_RSS_LUT_SET, + SXE2_DRV_CMD_RSS_FUNC_SET, + SXE2_DRV_CMD_RSS_HF_ADD, + SXE2_DRV_CMD_RSS_HF_DEL, + SXE2_DRV_CMD_RSS_HF_CLEAR, + + SXE2_DRV_CMD_FLOW_FILTER_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1), + SXE2_DRV_CMD_FLOW_FILTER_DEL, + SXE2_DRV_CMD_FLOW_FILTER_CLEAR, + SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC, + SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE, + SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY, + + 
SXE2_DRV_CMD_DEL_TM_ROOT = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1), + SXE2_DRV_CMD_ADD_TM_ROOT, + SXE2_DRV_CMD_ADD_TM_NODE, + SXE2_DRV_CMD_ADD_TM_QUEUE, + + SXE2_DRV_CMD_GET_PTP_CLOCK = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1), + + SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1), + SXE2_DRV_CMD_VLAN_FILTER_SWITCH, + SXE2_DRV_CMD_VLAN_OFFLOAD_CFG, + SXE2_DRV_CMD_VLAN_PORTVLAN_CFG, + SXE2_DRV_CMD_VLAN_CFG_QUERY, + + SXE2_DRV_CMD_RDMA_DUMP_PCAP = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1), + + SXE2_DRV_CMD_LINK_STATUS_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1), + + SXE2_DRV_CMD_MAC_ADDR_UC = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1), + SXE2_DRV_CMD_MAC_ADDR_MC, + + SXE2_DRV_CMD_PROMISC_CFG = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1), + SXE2_DRV_CMD_ALLMULTI_CFG, + + SXE2_DRV_CMD_LED_CTRL = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1), + + SXE2_DRV_CMD_OPT_EEP = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1), + + SXE2_DRV_CMD_SWITCH = + SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1), + SXE2_DRV_CMD_SWITCH_UPLINK, + SXE2_DRV_CMD_SWITCH_REPR, + SXE2_DRV_CMD_SWITCH_MODE, + SXE2_DRV_CMD_SWITCH_CPVSI, + + SXE2_DRV_CMD_UDPTUNNEL_ADD = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1), + SXE2_DRV_CMD_UDPTUNNEL_DEL, + SXE2_DRV_CMD_UDPTUNNEL_GET, + + SXE2_DRV_CMD_IPSEC_CAP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1), + SXE2_DRV_CMD_IPSEC_TXSA_ADD, + SXE2_DRV_CMD_IPSEC_RXSA_ADD, + SXE2_DRV_CMD_IPSEC_TXSA_DEL, + SXE2_DRV_CMD_IPSEC_RXSA_DEL, + SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR, + + SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1), + + SXE2_DRV_CMD_OPT_EEP_GET = + SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1), + +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c new file mode 100644 index 0000000000..a6cb51789e --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -0,0 +1,611 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * 
Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_string_fns.h> +#include <ethdev_pci.h> +#include <ctype.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <unistd.h> +#include <rte_tailq.h> +#include <rte_version.h> +#include <bus_pci_driver.h> +#include <dev_driver.h> +#include <ethdev_driver.h> +#include <rte_ethdev.h> +#include <rte_alarm.h> +#include <rte_dev_info.h> +#include <rte_pci.h> +#include <rte_mbuf_dyn.h> +#include <rte_cycles.h> +#include <rte_eal_paging.h> + +#include "sxe2_ethdev.h" +#include "sxe2_drv_cmd.h" +#include "sxe2_cmd_chnl.h" +#include "sxe2_common.h" +#include "sxe2_common_log.h" +#include "sxe2_host_regs.h" +#include "sxe2_ioctl_chnl_func.h" + +#define SXE2_PCI_VENDOR_ID_1 0x1ff2 +#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1 +#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2 + +#define SXE2_PCI_VENDOR_ID_2 0x1d94 +#define SXE2_PCI_DEVICE_ID_PF_2 0x1260 +#define SXE2_PCI_DEVICE_ID_VF_2 0x126f + +#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3 +#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4 + +#define SXE2_PCI_VENDOR_ID_206F 0x206f + +static const struct rte_pci_id pci_id_sxe2_tbl[] = { + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)}, + { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)}, + { .vendor_id = 0, }, +}; + +static s32 sxe2_dev_configure(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + return ret; +} + 
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) +{ +} + +static s32 sxe2_dev_stop(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (adapter->started == 0) + goto l_end; + + sxe2_txqs_all_stop(dev); + sxe2_rxqs_all_stop(dev); + + dev->data->dev_started = 0; + adapter->started = 0; +l_end: + return ret; +} + +static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) +{ + return 0; +} + +static s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } + +l_end: + return ret; +} + +static s32 sxe2_dev_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_queues_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to init queues."); + goto l_end; + } + + ret = sxe2_queues_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to enable queues."); + goto l_end; + } + + dev->data->dev_started = 1; + adapter->started = 1; + +l_end: + return ret; +} + +static s32 sxe2_dev_close(struct rte_eth_dev *dev) +{ + (void)sxe2_dev_stop(dev); + + sxe2_vsi_uninit(dev); + + return SXE2_SUCCESS; +} + +static s32 sxe2_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + + dev_info->max_rx_queues = vsi->rxqs.q_cnt; + 
dev_info->max_tx_queues = vsi->txqs.q_cnt; + dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE; + dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX; + dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM; + dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD; + dev_info->min_mtu = RTE_ETHER_MIN_MTU; + + dev_info->rx_offload_capa = + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + + dev_info->tx_offload_capa = + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->rx_queue_offload_capa = + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | + RTE_ETH_RX_OFFLOAD_KEEP_CRC | + RTE_ETH_RX_OFFLOAD_SCATTER | + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->tx_queue_offload_capa = + RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | + RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO | + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | + 
RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = SXE2_DEFAULT_RX_PTHRESH, + .hthresh = SXE2_DEFAULT_RX_HTHRESH, + .wthresh = SXE2_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = SXE2_DEFAULT_TX_PTHRESH, + .hthresh = SXE2_DEFAULT_TX_HTHRESH, + .wthresh = SXE2_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = SXE2_MAX_RING_DESC, + .nb_min = SXE2_MIN_RING_DESC, + .nb_align = SXE2_ALIGN, + .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX, + .nb_seg_max = SXE2_MAX_RING_DESC, + }; + + dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + + dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST; + dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST; + dev_info->default_rxportconf.nb_queues = 1; + dev_info->default_txportconf.nb_queues = 1; + dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN; + dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN; + + dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG; + + dev_info->rx_seg_capa.multi_pools = true; + + dev_info->rx_seg_capa.offset_allowed = false; + + dev_info->rx_seg_capa.offset_align_log2 = false; + + return SXE2_SUCCESS; +} + +static const struct eth_dev_ops sxe2_eth_dev_ops = { + .dev_configure = sxe2_dev_configure, + .dev_start = sxe2_dev_start, + .dev_stop = sxe2_dev_stop, + .dev_close = sxe2_dev_close, + .dev_infos_get = sxe2_dev_infos_get, +}; + +static void 
sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, + struct sxe2_drv_dev_caps_resp *dev_caps) +{ + adapter->port_idx = dev_caps->port_idx; + + adapter->cap_flags = 0; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP; + + if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE) + adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE; +} + +static s32 sxe2_func_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + struct sxe2_drv_dev_caps_resp dev_caps = {0}; + + ret = sxe2_drv_dev_caps_get(adapter, &dev_caps); + if (ret) + goto l_end; + + adapter->dev_type = dev_caps.dev_type; + + sxe2_drv_dev_caps_set(adapter, &dev_caps); + + sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps); + + sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps); + +l_end: + return ret; +} + +static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_ERROR; + + ret = sxe2_func_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret); + + return ret; +} + +static s32 sxe2_hw_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = SXE2_ERROR; + + 
PMD_INIT_FUNC_TRACE(); + + ret = sxe2_dev_caps_get(adapter); + if (ret) + PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret); + + return ret; +} + +static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = + SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_dev_info *dev_info = &adapter->dev_info; + struct sxe2_drv_dev_info_resp dev_info_resp = {0}; + struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0}; + s32 ret = SXE2_SUCCESS; + + dev_info->pci.bus_devid = pci_dev->addr.devid; + dev_info->pci.bus_function = pci_dev->addr.function; + + ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto l_end; + } + dev_info->pci.serial_number = dev_info_resp.dsn; + + ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret); + goto l_end; + } + dev_info->fw.build_id = dev_fw_info_resp.build_id; + dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id; + dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id; + dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id; + + if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr)) + rte_ether_addr_copy((struct rte_ether_addr *)dev_info_resp.mac_addr, + (struct rte_ether_addr *)dev_info->mac.perm_addr); + else + rte_eth_random_addr(dev_info->mac.perm_addr); + +l_end: + return ret; +} + +static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) +{ + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &sxe2_eth_dev_ops; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_hw_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret); + goto l_end; + } + + ret = sxe2_vsi_init(dev); + if (ret) { + 
PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); + goto init_vsi_err; + } + + ret = sxe2_dev_info_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret); + goto init_dev_info_err; + } + + goto l_end; + +init_dev_info_err: + sxe2_vsi_uninit(dev); +init_vsi_err: +l_end: + return ret; +} + +static s32 sxe2_dev_uninit(struct rte_eth_dev *dev) +{ + s32 ret = 0; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + goto l_end; + + ret = sxe2_dev_close(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_remove(struct sxe2_common_device *cdev) +{ + struct rte_eth_dev *eth_dev; + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); + s32 ret = SXE2_SUCCESS; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) { + PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed"); + goto l_end; + } + + ret = sxe2_dev_uninit(eth_dev); + if (ret) { + PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret); + goto l_end; + } + (void)rte_eth_dev_release_port(eth_dev); + +l_end: + return ret; +} + +static s32 sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev, + struct rte_eth_devargs *req_eth_da __rte_unused, + u16 owner_id __rte_unused, + struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_pci_device *pci_dev; + struct rte_eth_dev *eth_dev = NULL; + struct sxe2_adapter *adapter = NULL; + s32 ret = SXE2_SUCCESS; + + if (!cdev) { + ret = -EINVAL; + goto l_end; + } + + pci_dev = RTE_DEV_TO_PCI(cdev->dev); + eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (eth_dev == NULL) { + PMD_LOG_ERR(INIT, "Can not allocate ethdev"); + ret = -ENOMEM; + goto l_end; + } + } else { + if (!eth_dev) { + PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev"); + ret = -EINVAL; + goto l_end; + } + } + + adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev); + adapter->dev_port_id = 
eth_dev->data->port_id; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + adapter->cdev = cdev; + + ret = sxe2_dev_init(eth_dev, kvargs); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret); + goto l_release_port; + } + + rte_eth_dev_probing_finish(eth_dev); + PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!"); + goto l_end; + +l_release_port: + (void)rte_eth_dev_release_port(eth_dev); +l_end: + return ret; +} + +static s32 sxe2_parse_eth_devargs(struct rte_device *dev, + struct rte_eth_devargs *eth_da) +{ + int ret = 0; + + if (dev->devargs == NULL) + return 0; + + memset(eth_da, 0, sizeof(*eth_da)); + + if (dev->devargs->cls_str) { + ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1); + if (ret != 0) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->cls_str); + return -rte_errno; + } + } + + if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) { + ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s", + dev->devargs->args); + return -rte_errno; + } + } + + return 0; +} + +static s32 sxe2_eth_pmd_probe(struct sxe2_common_device *cdev, struct sxe2_dev_kvargs_info *kvargs) +{ + struct rte_eth_devargs eth_da = { .nb_ports = 0 }; + s32 ret = SXE2_SUCCESS; + + ret = sxe2_parse_eth_devargs(cdev->dev, &eth_da); + if (ret != 0) { + ret = -EINVAL; + goto l_end; + } + + ret = sxe2_eth_pmd_probe_pf(cdev, &eth_da, 0, kvargs); + +l_end: + return ret; +} + +static struct sxe2_class_driver sxe2_eth_pmd = { + .drv_class = SXE2_CLASS_TYPE_ETH, + .name = "SXE2_ETH_PMD_DRIVER_NAME", + .probe = sxe2_eth_pmd_probe, + .remove = sxe2_eth_pmd_remove, + .id_table = pci_id_sxe2_tbl, + .intr_lsc = 1, + .intr_rmv = 1, +}; + +RTE_INIT(rte_sxe2_pmd_init) +{ + sxe2_common_init(); + sxe2_class_driver_register(&sxe2_eth_pmd); +} + +RTE_PMD_EXPORT_NAME(net_sxe2); +RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, 
pci_id_sxe2_tbl); +RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2"); + +RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h new file mode 100644 index 0000000000..412f5d2b14 --- /dev/null +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ +#ifndef __SXE2_ETHDEV_H__ +#define __SXE2_ETHDEV_H__ +#include <rte_compat.h> +#include <rte_kvargs.h> +#include <rte_time.h> +#include <ethdev_driver.h> +#include <ethdev_pci.h> +#include <rte_tm_driver.h> +#include <rte_io.h> + +#include "sxe2_common.h" +#include "sxe2_errno.h" +#include "sxe2_type.h" +#include "sxe2_vsi.h" +#include "sxe2_queue.h" +#include "sxe2_irq.h" +#include "sxe2_osal.h" + +struct sxe2_link_msg { + __le32 speed; + u8 status; +}; + +enum sxe2_fnav_tunnel_flag_type { + SXE2_FNAV_TUN_FLAG_NO_TUNNEL, + SXE2_FNAV_TUN_FLAG_TUNNEL, + SXE2_FNAV_TUN_FLAG_ANY, +}; + +#define SXE2_VF_MAX_NUM 256 +#define SXE2_VSI_MAX_NUM 768 +#define SXE2_FRAME_SIZE_MAX 9832 +#define SXE2_VLAN_TAG_SIZE 4 +#define SXE2_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE) +#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD) + +#ifdef SXE2_TEST +#define SXE2_RESET_ACTIVE_WAIT_COUNT (5) +#else +#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000) +#endif +#define SXE2_NO_ACTIVE_CNT (10) + +#define SXE2_WOKER_DELAY_5MS (5) +#define SXE2_WOKER_DELAY_10MS (10) +#define SXE2_WOKER_DELAY_20MS (20) +#define SXE2_WOKER_DELAY_30MS (30) + +#define SXE2_RESET_DETEC_WAIT_COUNT (100) +#define SXE2_RESET_DONE_WAIT_COUNT (250) +#define SXE2_RESET_WAIT_MS (10) + +#define SXE2_RESET_WAIT_MIN (10) +#define 
SXE2_RESET_WAIT_MAX (20) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)((n) & 0xffffffff)) + +#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0 +#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2 +#define SXE2_MODULE_TYPE_SFP 0x03 +#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D +#define SXE2_MODULE_TYPE_QSFP28 0x11 +#define SXE2_MODULE_SFF_ADDR_MODE 0x04 +#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40 +#define SXE2_MODULE_REVISION_ADDR 0x01 +#define SXE2_MODULE_SFF_8472_COMP 0x5E +#define SXE2_MODULE_SFF_8472_SWAP 0x5C +#define SXE2_MODULE_QSFP_MAX_LEN 640 +#define SXE2_MODULE_SFF_8472_UNSUP 0x0 +#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40 +#define SXE2_MODULE_SFF_SFP_TYPE 0x03 + +#define SXE2_MODULE_SFF_8079 0x1 +#define SXE2_MODULE_SFF_8079_LEN 256 +#define SXE2_MODULE_SFF_8472 0x2 +#define SXE2_MODULE_SFF_8472_LEN 512 +#define SXE2_MODULE_SFF_8636 0x3 +#define SXE2_MODULE_SFF_8636_LEN 256 +#define SXE2_MODULE_SFF_8636_MAX_LEN 640 +#define SXE2_MODULE_SFF_8436 0x4 +#define SXE2_MODULE_SFF_8436_LEN 256 +#define SXE2_MODULE_SFF_8436_MAX_LEN 640 + +enum sxe2_wk_type { + SXE2_WK_MONITOR, + SXE2_WK_MONITOR_IM, + SXE2_WK_POST, + SXE2_WK_MBX, +}; + +enum { + SXE2_FLAG_LEGACY_RX_ENABLE = 0, + SXE2_FLAG_LRO_ENABLE = 1, + SXE2_FLAG_RXQ_DISABLED = 2, + SXE2_FLAG_TXQ_DISABLED = 3, + SXE2_FLAG_DRV_REMOVING = 4, + SXE2_FLAG_RESET_DETECTED = 5, + SXE2_FLAG_CORE_RESET_DONE = 6, + SXE2_FLAG_RESET_ACTIVED = 7, + SXE2_FLAG_RESET_PENDING = 8, + SXE2_FLAG_RESET_REQUEST = 9, + SXE2_FLAGS_RESET_PROCESS_DONE = 10, + SXE2_FLAG_RESET_FAILED = 11, + SXE2_FLAG_DRV_PROBE_DONE = 12, + SXE2_FLAG_NETDEV_REGISTED = 13, + SXE2_FLAG_DRV_UP = 15, + SXE2_FLAG_DCB_ENABLE = 16, + SXE2_FLAG_FLTR_SYNC = 17, + + SXE2_FLAG_EVENT_IRQ_DISABLED = 18, + SXE2_FLAG_SUSPEND = 19, + SXE2_FLAG_FNAV_ENABLE = 20, + + SXE2_FLAGS_NBITS +}; + +struct sxe2_link_context { + rte_spinlock_t link_lock; + bool link_up; + u32 speed; +}; + +struct 
sxe2_devargs { + u8 flow_dup_pattern_mode; + u8 func_flow_direct_en; + u8 fnav_stat_type; + u8 high_performance_mode; + u8 sched_layer_mode; + u8 sw_stats_en; + u8 rx_low_latency; +}; + +#define SXE2_PCI_MAP_BAR_INVALID ((u8)0xff) +#define SXE2_PCI_MAP_INVALID_VAL ((u32)0xffffffff) + +enum sxe2_pci_map_resource { + SXE2_PCI_MAP_RES_INVALID = 0, + SXE2_PCI_MAP_RES_DOORBELL_TX, + SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + SXE2_PCI_MAP_RES_IRQ_DYN, + SXE2_PCI_MAP_RES_IRQ_ITR, + SXE2_PCI_MAP_RES_IRQ_MSIX, + SXE2_PCI_MAP_RES_PTP, + SXE2_PCI_MAP_RES_MAX_COUNT, +}; + +enum sxe2_udp_tunnel_protocol { + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0, + SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE, + SXE2_UDP_TUNNEL_PROTOCOL_GENEVE, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4, + SXE2_UDP_TUNNEL_PROTOCOL_GTP_U, + SXE2_UDP_TUNNEL_PROTOCOL_PFCP, + SXE2_UDP_TUNNEL_PROTOCOL_ECPRI, + SXE2_UDP_TUNNEL_PROTOCOL_MPLS, + SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10, + SXE2_UDP_TUNNEL_PROTOCOL_L2TP, + SXE2_UDP_TUNNEL_PROTOCOL_TEREDO, + SXE2_UDP_TUNNEL_MAX, +}; + +struct sxe2_pci_map_addr_info { + u64 addr_base; + u8 bar_idx; + u8 reg_width; +}; + +struct sxe2_pci_map_segment_info { + enum sxe2_pci_map_resource type; + void __iomem *addr; + resource_size_t page_inner_offset; + resource_size_t len; +}; + +struct sxe2_pci_map_bar_info { + u8 bar_idx; + u8 map_cnt; + struct sxe2_pci_map_segment_info *seg_info; +}; + +struct sxe2_pci_map_context { + u8 bar_cnt; + struct sxe2_pci_map_bar_info *bar_info; + struct sxe2_pci_map_addr_info *addr_info; +}; + +struct sxe2_dev_mac_info { + u8 perm_addr[ETH_ALEN]; +}; + +struct sxe2_pci_info { + u64 serial_number; + u8 bus_devid; + u8 bus_function; + u16 max_vfs; +}; + +struct sxe2_fw_info { + u8 main_version_id; + u8 sub_version_id; + u8 fix_version_id; + u8 build_id; +}; + +struct sxe2_dev_info { + struct rte_eth_dev_data *dev_data; + struct sxe2_pci_info pci; + struct sxe2_fw_info fw; + struct sxe2_dev_mac_info mac; +}; + +enum sxe2_udp_tunnel_status { + SXE2_UDP_TUNNEL_DISABLE = 0x0, + 
SXE2_UDP_TUNNEL_ENABLE, +}; + +struct sxe2_udp_tunnel_cfg { + u8 protocol; + u8 dev_status; + u16 dev_port; + u16 dev_ref_cnt; + + u16 fw_port; + u8 fw_status; + u8 fw_dst_en; + u8 fw_src_en; + u8 fw_used; +}; + +struct sxe2_udp_tunnel_ctx { + struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX]; + rte_spinlock_t lock; +}; + +struct sxe2_repr_context { + u16 nb_vf; + u16 nb_repr_vf; + struct rte_eth_dev **vf_rep_eth_dev; + struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM]; +}; + +struct sxe2_repr_private_data { + struct rte_eth_dev *rep_eth_dev; + struct sxe2_adapter *parent_adapter; + + struct sxe2_vsi *cp_vsi; + u16 repr_q_id; + + u16 repr_id; + u16 repr_pf_id; + u16 repr_vf_id; + u16 repr_vf_vsi_id; + u16 repr_vf_k_vsi_id; + u16 repr_vf_u_vsi_id; +}; + +struct sxe2_sched_hw_cap { + u32 tm_layers; + u8 root_max_children; + u8 prio_max; + u8 adj_lvl; +}; + +struct sxe2_adapter { + struct sxe2_common_device *cdev; + struct sxe2_dev_info dev_info; + struct rte_pci_device *pci_dev; + struct sxe2_repr_private_data *repr_priv_data; + struct sxe2_pci_map_context map_ctxt; + struct sxe2_irq_context irq_ctxt; + struct sxe2_queue_context q_ctxt; + struct sxe2_vsi_context vsi_ctxt; + struct sxe2_devargs devargs; + u16 dev_port_id; + u64 cap_flags; + enum sxe2_dev_type dev_type; + u32 ptype_tbl[SXE2_MAX_PTYPE_NUM]; + struct rte_ether_addr mac_addr; + u8 port_idx; + u8 pf_idx; + u32 tx_mode_flags; + u32 rx_mode_flags; + u8 started; +}; + +#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ + ((struct sxe2_adapter *)(dev)->data->dev_private) + +#endif diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h new file mode 100644 index 0000000000..7695a0206f --- /dev/null +++ b/drivers/net/sxe2/sxe2_irq.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_IRQ_H__ +#define __SXE2_IRQ_H__ + +#include <ethdev_driver.h> + +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_IRQ_MAX_CNT 2048 + +#define SXE2_LAN_MSIX_MIN_CNT 1 + +#define SXE2_EVENT_IRQ_IDX 0 + +#define SXE2_MAX_INTR_QUEUE_NUM 256 + +#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16) + +#define SXE2_ITR_1000K 1 +#define SXE2_ITR_500K 2 +#define SXE2_ITR_50K 20 + +#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K) +#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K) + +struct sxe2_fwc_msix_caps; +struct sxe2_adapter; + +struct sxe2_irq_context { + struct rte_intr_handle *reset_handle; + s32 reset_event_fd; + s32 other_event_fd; + + u16 max_cnt_hw; + u16 base_idx_in_func; + + u16 rxq_avail_cnt; + u16 rxq_base_idx_in_pf; + + u16 rxq_irq_cnt; + u32 *rxq_msix_idx; + s32 *rxq_event_fd; +}; + +#endif diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c new file mode 100644 index 0000000000..98343679f6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.c @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps) +{ + adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt; + adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf; +} + +s32 sxe2_queues_init(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + u16 buf_size; + u16 frame_size; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + + frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD; + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq) + continue; + + buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE); + if (frame_size > rxq->rx_buf_len) + dev->data->scattered_rx = 1; + } + + return ret; +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h new file mode 100644 index 0000000000..7fa22e2820 --- /dev/null +++ b/drivers/net/sxe2/sxe2_queue.h @@ -0,0 +1,191 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_QUEUE_H__ +#define __SXE2_QUEUE_H__ +#include <rte_ethdev.h> +#include <rte_io.h> +#include <rte_stdatomic.h> +#include <ethdev_driver.h> + +#include "sxe2_drv_cmd.h" +#include "sxe2_txrx_common.h" + +#define SXE2_PCI_REG_READ(reg) \ + rte_read32(reg) +#define SXE2_PCI_REG_WRITE_WC(reg, value) \ + rte_write32_wc((rte_cpu_to_le_32(value)), reg) +#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \ + rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) + +struct sxe2_queue_context { + u16 qp_cnt_assign; + u16 base_idx_in_pf; + + u32 tx_mode_flags; + u32 rx_mode_flags; +}; + +struct sxe2_tx_buffer { + struct rte_mbuf *mbuf; + + u16 next_id; + u16 last_id; +}; + +struct sxe2_tx_buffer_vec { + struct rte_mbuf *mbuf; +}; + +struct sxe2_txq_stats { + u64 tx_restart; + u64 tx_busy; + + u64 tx_linearize; + u64 tx_tso_linearize_chk; + u64 tx_vlan_insert; + u64 tx_tso_packets; + u64 tx_tso_bytes; + u64 tx_csum_none; + u64 tx_csum_partial; + u64 tx_csum_partial_inner; + u64 tx_queue_dropped; + u64 tx_xmit_more; + u64 tx_pkts_num; + u64 tx_desc_not_done; +}; + +struct sxe2_tx_queue; +struct sxe2_txq_ops { + void (*queue_reset)(struct sxe2_tx_queue *txq); + void (*mbufs_release)(struct sxe2_tx_queue *txq); + void (*buffer_ring_free)(struct sxe2_tx_queue *txq); +}; +struct sxe2_tx_queue { + volatile union sxe2_tx_data_desc *desc_ring; + struct sxe2_tx_buffer *buffer_ring; + volatile u32 *tdt_reg_addr; + + u64 offloads; + u16 ring_depth; + u16 desc_free_num; + + u16 free_thresh; + + u16 rs_thresh; + u16 next_use; + u16 next_clean; + + u16 desc_used_num; + u16 next_dd; + u16 next_rs; + u16 ipsec_pkt_md_offset; + + u16 port_id; + u16 queue_id; + u16 idx_in_func; + bool tx_deferred_start; + u8 pthresh; + u8 hthresh; + u8 wthresh; + u16 reg_idx; + u64 base_addr; + struct sxe2_vsi *vsi; + const struct rte_memzone *mz; + struct sxe2_txq_ops ops; + u8 vlan_flag; + u8 use_ctx:1, + res:7; +}; +struct sxe2_rx_queue; +struct sxe2_rxq_ops { + void (*queue_reset)(struct 
sxe2_rx_queue *rxq); + void (*mbufs_release)(struct sxe2_rx_queue *rxq); +}; +struct sxe2_rxq_stats { + u64 rx_pkts_num; + u64 rx_rss_pkt_num; + u64 rx_fnav_pkt_num; + u64 rx_ptp_pkt_num; + u32 rx_vec_align_drop; + + u32 rxdid_1588_err; + u32 ip_csum_err; + u32 l4_csum_err; + u32 outer_ip_csum_err; + u32 outer_l4_csum_err; + u32 macsec_err; + u32 ipsec_err; + + u64 ptype_pkts[SXE2_MAX_PTYPE_NUM]; +}; + +struct sxe2_rxq_sw_stats { + RTE_ATOMIC(uint64_t) pkts; + RTE_ATOMIC(uint64_t) bytes; + RTE_ATOMIC(uint64_t) drop_pkts; + RTE_ATOMIC(uint64_t) drop_bytes; + RTE_ATOMIC(uint64_t) unicast_pkts; + RTE_ATOMIC(uint64_t) multicast_pkts; + RTE_ATOMIC(uint64_t) broadcast_pkts; +}; + +struct sxe2_rx_queue { + volatile union sxe2_rx_desc *desc_ring; + volatile u32 *rdt_reg_addr; + struct rte_mempool *mb_pool; + struct rte_mbuf **buffer_ring; + struct sxe2_vsi *vsi; + + u64 offloads; + u16 ring_depth; + u16 rx_free_thresh; + u16 processing_idx; + u16 hold_num; + u16 next_ret_pkt; + u16 batch_alloc_trigger; + u16 completed_pkts_num; + u64 update_time; + u32 desc_ts; + u64 ts_high; + u32 ts_low; + u32 ts_need_update; + u8 crc_len; + bool fnav_enable; + + struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM]; + + struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2]; + struct rte_mbuf *pkt_first_seg; + struct rte_mbuf *pkt_last_seg; + u64 mbuf_init_value; + u16 realloc_num; + u16 realloc_start; + struct rte_mbuf fake_mbuf; + + const struct rte_memzone *mz; + struct sxe2_rxq_ops ops; + rte_iova_t base_addr; + u16 reg_idx; + u32 low_desc_waterline : 16; + u32 ldw_event_pending : 1; + struct sxe2_rxq_sw_stats sw_stats; + u16 port_id; + u16 queue_id; + u16 idx_in_func; + u16 rx_buf_len; + u16 rx_hdr_len; + u16 max_pkt_len; + bool rx_deferred_start; + u8 drop_en; +}; + +struct sxe2_adapter; + +void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_queue_caps *q_caps); + +s32 sxe2_queues_init(struct rte_eth_dev *dev); + +#endif diff --git 
a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h new file mode 100644 index 0000000000..7284cea4b6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_common.h @@ -0,0 +1,541 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef _SXE2_TXRX_COMMON_H_ +#define _SXE2_TXRX_COMMON_H_ +#include <stdbool.h> +#include "sxe2_type.h" + +#define SXE2_ALIGN_RING_DESC 32 +#define SXE2_MIN_RING_DESC 64 +#define SXE2_MAX_RING_DESC 4096 + +#define SXE2_VECTOR_PATH 0 +#define SXE2_VECTOR_OFFLOAD_PATH 1 +#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2 + +#define SXE2_MAX_PTYPE_NUM 1024 +#define SXE2_MIN_BUF_SIZE 1024 + +#define SXE2_ALIGN 32 +#define SXE2_DESC_ADDR_ALIGN 128 + +#define SXE2_MIN_TSO_MSS 88 +#define SXE2_MAX_TSO_MSS 9728 + +#define SXE2_TX_MTU_SEG_MAX 15 + +#define SXE2_TX_MIN_PKT_LEN 17 +#define SXE2_TX_MAX_BURST 32 +#define SXE2_TX_MAX_FREE_BUF 64 +#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024) + +#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_FREE_THRESH 32 + +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + +#define SXE2_TX_PKTS_BURST_BATCH_NUM 32 + +union sxe2_tx_offload_info { + u64 data; + struct { + u64 l2_len:7; + u64 l3_len:9; + u64 l4_len:8; + u64 tso_segsz:16; + u64 outer_l2_len:8; + u64 outer_l3_len:16; + }; +}; + +#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_QINQ | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_SEC_OFFLOAD | \ + RTE_MBUF_F_TX_IEEE1588_TMST) + +#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \ + RTE_MBUF_F_TX_L4_MASK | \ + RTE_MBUF_F_TX_TCP_SEG | \ + RTE_MBUF_F_TX_UDP_SEG | \ + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM) + +struct sxe2_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 ipsec_offset; + __le64 type_cmd_tso_mss; +}; + 
+#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2 +#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9 +#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23 + +#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4 +#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11 +#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12 +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13 +#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16 +#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30 +#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50 +#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50 + +#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT) + +#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT) +#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT) + +enum sxe2_tx_ctxt_desc_eipt_bits { + SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0, + SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1, + SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2, + SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3, +}; + +enum sxe2_tx_ctxt_desc_l4tunt_bits { + SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, + SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT, +}; + +enum sxe2_tx_ctxt_desc_cmd_bits { + SXE2_TX_CTXT_DESC_CMD_TSO = 0x01, + SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04, + SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08, + SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00, + SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10, + SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20, + SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30, + SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40 +}; +#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT) +#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT) +#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT) +#define 
SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \ + (((u64)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT) + +union sxe2_tx_data_desc { + struct { + __le64 buf_addr; + __le64 type_cmd_off_bsz_l2t; + } read; + struct { + __le64 rsvd; + __le64 dd; + } wb; +}; + +#define SXE2_TX_DATA_DESC_CMD_SHIFT 4 +#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 16 +#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34 +#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48 + +#define SXE2_TX_DATA_DESC_CMD_MASK \ + (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT) +#define SXE2_TX_DATA_DESC_OFFSET_MASK \ + (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT) +#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \ + (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) +#define SXE2_TX_DATA_DESC_L2TAG1_MASK \ + (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT) + +#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0) +#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7) +#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14) + +#define SXE2_TX_DESC_DTYPE_MASK 0xF +#define SXE2_TX_DATA_DESC_MACLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_MASK \ + (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_MASK \ + (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \ + (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT) +#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT) +#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \ + (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +enum sxe2_tx_desc_type { + SXE2_TX_DESC_DTYPE_DATA = 0x0, + SXE2_TX_DESC_DTYPE_CTXT = 0x1, + SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8, + SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF, +}; + +enum sxe2_tx_data_desc_cmd_bits { + SXE2_TX_DATA_DESC_CMD_EOP = 0x0001, + SXE2_TX_DATA_DESC_CMD_RS = 0x0002, + SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004, + SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008, + SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020, + 
SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040, + SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200, + SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300, + SXE2_TX_DATA_DESC_CMD_RE = 0x0400 +}; +#define SXE2_TX_DATA_DESC_CMD_RS_MASK \ + (((u64)SXE2_TX_DATA_DESC_CMD_RS) << SXE2_TX_DATA_DESC_CMD_SHIFT) + +#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0X3FFFUL + +#define SXE2_TX_DESC_RING_ALIGN \ + (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc)) + +#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF + +#define SXE2_TX_FILL_PER_LOOP 4 +#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1) +#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64) + +#define SXE2_RX_MAX_BURST 32 +#define SXE2_RING_SIZE_MIN 1024 +#define SXE2_RX_MAX_NSEG 2 + +#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST +#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST + +#define SXE2_RXQ_CTX_DBUFF_SHIFT 7 + +#define SXE2_RX_NUM_PER_LOOP 8 + +#define SXE2_RX_FLEX_DESC_PTYPE_S (16) +#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL) + +#define SXE2_RX_HBUF_LEN_UNIT 6 +#define SXE2_RX_LDW_LEN_UNIT 6 +#define SXE2_RX_DBUF_LEN_UNIT 7 +#define SXE2_RX_DBUF_LEN_MASK (~0x7F) + +#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200 + +#define SXE2_RX_VECTOR_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH | \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP) + +#define SXE2_DEFAULT_RX_FREE_THRESH 32 +#define SXE2_DEFAULT_RX_PTHRESH 8 +#define SXE2_DEFAULT_RX_HTHRESH 8 +#define SXE2_DEFAULT_RX_WTHRESH 0 + +#define SXE2_DEFAULT_TX_FREE_THRESH 32 +#define SXE2_DEFAULT_TX_PTHRESH 32 +#define SXE2_DEFAULT_TX_HTHRESH 0 +#define SXE2_DEFAULT_TX_WTHRESH 0 +#define SXE2_DEFAULT_TX_RSBIT_THRESH 32 + +#define SXE2_RX_SEG_NUM 2 + +#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC +#define sxe2_rx_desc sxe2_rx_16b_desc +#else +#define sxe2_rx_desc sxe2_rx_32b_desc +#endif + +union sxe2_rx_16b_desc { + struct { + 
__le64 pkt_addr; + __le64 hdr_addr; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + } wb; +}; + +union sxe2_rx_32b_desc { + struct { + __le64 pkt_addr; + __le64 hdr_addr; + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + u8 rxdid_src; + u8 mirror; + __le16 l2tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 status_lrocnt_fdpf_id; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + u8 acl_pf_id; + u8 sw_pf_id; + __le16 flow_id; + + __le32 fd_filter_id; + + } wb; + struct { + u8 rxdid_src_fd_eudpe; + u8 mirror; + __le16 l2_tag1; + __le32 filter_status; + + __le64 status_err_ptype_len; + + __le32 ext_status_ts_low; + __le16 l2tag2_1st; + __le16 l2tag2_2nd; + + __le32 ts_h; + __le32 fd_filter_id; + + } wb_ts; +}; + +enum sxe2_rx_lro_desc_max_num { + SXE2_RX_LRO_DESC_MAX_1 = 1, + SXE2_RX_LRO_DESC_MAX_4 = 4, + SXE2_RX_LRO_DESC_MAX_8 = 8, + SXE2_RX_LRO_DESC_MAX_16 = 16, + SXE2_RX_LRO_DESC_MAX_32 = 32, + SXE2_RX_LRO_DESC_MAX_48 = 48, + SXE2_RX_LRO_DESC_MAX_64 = 64, + SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64, +}; + +enum sxe2_rx_desc_rxdid { + SXE2_RX_DESC_RXDID_16B = 0, + SXE2_RX_DESC_RXDID_32B, + SXE2_RX_DESC_RXDID_1588, + SXE2_RX_DESC_RXDID_FD, +}; + +#define SXE2_RX_DESC_RXDID_SHIFT (0) +#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT) +#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT) + +#define SXE2_RX_DESC_PKT_SRC_SHIFT (3) +#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT) +#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT) + +#define SXE2_RX_DESC_FD_VLD_SHIFT (5) +#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT) +#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT) + +#define 
SXE2_RX_DESC_EUDPE_SHIFT (6) +#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT) +#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT) + +#define SXE2_RX_DESC_UDP_NET_SHIFT (7) +#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT) +#define SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \ + (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT) + +#define SXE2_RX_DESC_MIRR_ID_SHIFT (0) +#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT) +#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT) + +#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6) +#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT) +#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \ + (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT) + +#define SXE2_RX_DESC_PKT_LEN_SHIFT (32) +#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT) +#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT) + +#define SXE2_RX_DESC_HDR_LEN_SHIFT (46) +#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT) +#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT) + +#define SXE2_RX_DESC_SPH_SHIFT (57) +#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT) +#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT) + +#define SXE2_RX_DESC_PTYPE_SHIFT (16) +#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT) +#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL) +#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT) + +#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32) +#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL) + +#define 
SXE2_RX_DESC_LROCNT_SHIFT (0) +#define SXE2_RX_DESC_LROCNT_MASK (0xF) + +enum sxe2_rx_desc_status_shift { + SXE2_RX_DESC_STATUS_DD_SHIFT = 0, + SXE2_RX_DESC_STATUS_EOP_SHIFT = 1, + SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2, + + SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3, + SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4, + SXE2_RX_DESC_STATUS_SECP_SHIFT = 5, + SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6, + SXE2_RX_DESC_STATUS_SECE_SHIFT = 26, + SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27, + SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28, + SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30, + SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59, + SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60, + SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61, + SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62, + SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63, +}; + +#define SXE2_RX_DESC_STATUS_DD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT) +#define SXE2_RX_DESC_STATUS_EOP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT) +#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT) +#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT) +#define SXE2_RX_DESC_STATUS_CRCP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECP_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT) +#define SXE2_RX_DESC_STATUS_SECTAG_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT) +#define SXE2_RX_DESC_STATUS_SECE_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT) +#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT) +#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) +#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \ + (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT) +#define SXE2_RX_DESC_STATUS_LPBK_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT) +#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT) +#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \ + (0x1ULL << 
SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT) +#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT) +#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \ + (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT) + +enum sxe2_rx_desc_umbcast_val { + SXE2_RX_DESC_STATUS_UNICAST = 0, + SXE2_RX_DESC_STATUS_MUTICAST = 1, + SXE2_RX_DESC_STATUS_BOARDCAST = 2, +}; + +#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \ + (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT) + +enum sxe2_rx_desc_error_shift { + SXE2_RX_DESC_ERROR_RXE_SHIFT = 7, + SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8, + SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9, + + SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10, + + SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11, + + SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12, + SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13, + SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14, +}; + +#define SXE2_RX_DESC_ERROR_RXE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT) +#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT) +#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT) +#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \ + (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT) + +#define SXE2_RX_DESC_QW1_ERRORS_MASK \ + (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \ + SXE2_RX_DESC_ERROR_CSUM_EIP_MASK) + +enum sxe2_rx_desc_ext_status_shift { + SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4, + SXE2_RX_DESC_EXT_STATUS_RSVD = 5, + SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7, + SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13, +}; +#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT) +#define 
SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \ + (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT) +#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \ + (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT) + +enum sxe2_rx_desc_ipsec_shift { + SXE2_RX_DESC_IPSEC_PKT_S = 21, + SXE2_RX_DESC_IPSEC_ENGINE_S = 22, + SXE2_RX_DESC_IPSEC_MODE_S = 23, + SXE2_RX_DESC_IPSEC_STATUS_S = 24, + + SXE2_RX_DESC_IPSEC_LAST +}; + +enum sxe2_rx_desc_ipsec_status { + SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0, + SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1, + SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2, + SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3, + SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4, + SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5, + SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6, + SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7, +}; + +#define SXE2_RX_DESC_IPSEC_PKT_MASK \ + (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S) +#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7) +#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \ + (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \ + SXE2_RX_DESC_IPSEC_STATUS_MASK) + +#define SXE2_RX_ERR_BITS 0x3f + +#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4 + +#define SXE2_RX_DESC_RING_ALIGN \ + (SXE2_ALIGN / sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_RING_SIZE \ + ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc)) + +#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128) + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h new file mode 100644 index 0000000000..4924b0f41f --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef SXE2_TXRX_POLL_H +#define SXE2_TXRX_POLL_H + +#include "sxe2_queue.h" + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); + +#endif diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c new file mode 100644 index 0000000000..e1e0e279cd --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.c @@ -0,0 +1,212 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_os.h> +#include <rte_tailq.h> +#include <rte_malloc.h> +#include "sxe2_ethdev.h" +#include "sxe2_vsi.h" +#include "sxe2_common_log.h" +#include "sxe2_cmd_chnl.h" + +void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps) +{ + adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id; + adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id; + adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type; +} + +static struct sxe2_vsi * +sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0); + if (vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct."); + goto l_end; + } + vsi->adapter = adapter; + + vsi->vsi_id = vsi_id; + vsi->vsi_type = vsi_type; + +l_end: + return vsi; +} + +static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, u16 num_queues, u16 base_idx) +{ + vsi->txqs.q_cnt = num_queues; + vsi->rxqs.q_cnt = num_queues; + vsi->txqs.base_idx_in_func = base_idx; + vsi->rxqs.base_idx_in_func = base_idx; +} + +static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi) +{ + vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC; + vsi->rxqs.depth = vsi->rxqs.depth ? 
: SXE2_DFLT_NUM_RX_DESC; + + PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.", + vsi->vsi_id, vsi->txqs.q_cnt, + vsi->txqs.depth, vsi->rxqs.depth); +} + +static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, u16 num_irqs, u16 base_idx) +{ + vsi->irqs.avail_cnt = num_irqs; + vsi->irqs.base_idx_in_pf = base_idx; +} + +static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter, u16 vsi_id, u16 vsi_type) +{ + struct sxe2_vsi *vsi = NULL; + u16 num_queues = 0; + u16 queue_base_idx = 0; + u16 num_irqs = 0; + u16 irq_base_idx = 0; + + vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type); + if (vsi == NULL) + goto l_end; + + if (vsi_type == SXE2_VSI_T_DPDK_PF || + vsi_type == SXE2_VSI_T_DPDK_VF) { + num_queues = adapter->q_ctxt.qp_cnt_assign; + queue_base_idx = adapter->q_ctxt.base_idx_in_pf; + + num_irqs = adapter->irq_ctxt.max_cnt_hw; + irq_base_idx = adapter->irq_ctxt.base_idx_in_func; + } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) { + num_queues = 1; + num_irqs = 1; + } + + sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx); + + sxe2_vsi_queues_cfg(vsi); + + sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx); + +l_end: + return vsi; +} + +static void sxe2_vsi_node_free(struct sxe2_vsi *vsi) +{ + if (!vsi) + return; + + rte_free(vsi); + vsi = NULL; +} + +static s32 sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi) +{ + s32 ret = SXE2_SUCCESS; + + if (vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) { + ret = sxe2_drv_vsi_del(adapter, vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + if (ret == -EPERM) + goto l_free; + goto l_end; + } + } + +l_free: + rte_free(vsi); + vsi = NULL; + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); +l_end: + return ret; +} + +static s32 sxe2_main_vsi_create(struct sxe2_adapter *adapter) +{ + s32 ret = SXE2_SUCCESS; + u16 vsi_id = adapter->vsi_ctxt.dpdk_vsi_id; + u16 
vsi_type = adapter->vsi_ctxt.vsi_type; + bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID); + + PMD_INIT_FUNC_TRACE(); + + if (!is_reused) + vsi_type = SXE2_VSI_T_DPDK_PF; + else + PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id); + + adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type); + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret); + ret = -ENOMEM; + goto l_end; + } + + if (!is_reused) { + ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret); + goto l_free_vsi; + } + + adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id; + PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI"); + } + + goto l_end; + +l_free_vsi: + sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi); + adapter->vsi_ctxt.main_vsi = NULL; +l_end: + return ret; +} + +s32 sxe2_vsi_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret = 0; + + PMD_INIT_FUNC_TRACE(); + + ret = sxe2_main_vsi_create(adapter); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +void sxe2_vsi_uninit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + + if (adapter->vsi_ctxt.main_vsi == NULL) { + PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy."); + goto l_end; + } + + ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi); + if (ret) { + PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret); + goto l_end; + } + + PMD_LOG_DEBUG(DRV, "vsi destroyed."); + +l_end: + return; +} diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h new file mode 100644 index 0000000000..8870cbe22d --- /dev/null +++ b/drivers/net/sxe2/sxe2_vsi.h @@ -0,0 +1,205 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars 
Micro System Technologies Co., Ltd. + */ + +#ifndef __sxe2_VSI_H__ +#define __sxe2_VSI_H__ +#include <rte_os.h> +#include "sxe2_type.h" +#include "sxe2_drv_cmd.h" + +#define SXE2_MAX_BOND_MEMBER_CNT 4 + +enum sxe2_drv_type { + SXE2_MAX_DRV_TYPE_DPDK = 0, + SXE2_MAX_DRV_TYPE_KERNEL, + SXE2_MAX_DRV_TYPE_CNT, +}; + +#define SXE2_MAX_USER_PRIORITY (8) + +#define SXE2_DFLT_NUM_RX_DESC 512 +#define SXE2_DFLT_NUM_TX_DESC 512 + +#define SXE2_DFLT_Q_NUM_OTHER_VSI 1 +#define SXE2_INVALID_VSI_ID 0xFFFF + +struct sxe2_adapter; +struct sxe2_drv_vsi_caps; +struct rte_eth_dev; + +enum sxe2_vsi_type { + SXE2_VSI_T_PF = 0, + SXE2_VSI_T_VF, + SXE2_VSI_T_CTRL, + SXE2_VSI_T_LB, + SXE2_VSI_T_MACVLAN, + SXE2_VSI_T_ESW, + SXE2_VSI_T_RDMA, + SXE2_VSI_T_DPDK_PF, + SXE2_VSI_T_DPDK_VF, + SXE2_VSI_T_DPDK_ESW, + SXE2_VSI_T_NR, +}; + +struct sxe2_queue_info { + u16 base_idx_in_nic; + u16 base_idx_in_func; + u16 q_cnt; + u16 depth; + u16 rx_buf_len; + u16 max_frame_len; + struct sxe2_queue **queues; +}; + +struct sxe2_vsi_irqs { + u16 avail_cnt; + u16 used_cnt; + u16 base_idx_in_pf; +}; + +enum { + sxe2_VSI_DOWN = 0, + sxe2_VSI_CLOSE, + sxe2_VSI_DISABLE, + sxe2_VSI_MAX, +}; + +struct sxe2_stats { + u64 ipackets; + + u64 opackets; + + u64 ibytes; + + u64 obytes; + + u64 ierrors; + + u64 imissed; + + u64 rx_out_of_buffer; + u64 rx_qblock_drop; + + u64 tx_frame_good; + u64 rx_frame_good; + u64 rx_crc_errors; + u64 tx_bytes_good; + u64 rx_bytes_good; + u64 tx_multicast_good; + u64 tx_broadcast_good; + u64 rx_multicast_good; + u64 rx_broadcast_good; + u64 rx_len_errors; + u64 rx_out_of_range_errors; + u64 rx_oversize_pkts_phy; + u64 rx_symbol_err; + u64 rx_pause_frame; + u64 tx_pause_frame; + + u64 rx_discards_phy; + u64 rx_discards_ips_phy; + + u64 tx_dropped_link_down; + u64 rx_undersize_good; + u64 rx_runt_error; + u64 tx_bytes_good_bad; + u64 tx_frame_good_bad; + u64 rx_jabbers; + u64 rx_size_64; + u64 rx_size_65_127; + u64 rx_size_128_255; + u64 rx_size_256_511; + u64 rx_size_512_1023; + u64 
rx_size_1024_1522; + u64 rx_size_1523_max; + u64 rx_pcs_symbol_err_phy; + u64 rx_corrected_bits_phy; + u64 rx_err_lane_0_phy; + u64 rx_err_lane_1_phy; + u64 rx_err_lane_2_phy; + u64 rx_err_lane_3_phy; + + u64 rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY]; + u64 rx_illegal_bytes; + u64 rx_oversize_good; + u64 tx_unicast; + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_vlan_packet_good; + u64 tx_size_64; + u64 tx_size_65_127; + u64 tx_size_128_255; + u64 tx_size_256_511; + u64 tx_size_512_1023; + u64 tx_size_1024_1522; + u64 tx_size_1523_max; + u64 tx_underflow_error; + u64 rx_byte_good_bad; + u64 rx_frame_good_bad; + u64 rx_unicast_good; + u64 rx_vlan_packets; + + u64 prio_xoff_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_rx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xoff_tx[SXE2_MAX_USER_PRIORITY]; + u64 prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY]; + + u64 rx_vsi_unicast_packets; + u64 rx_vsi_bytes; + u64 tx_vsi_unicast_packets; + u64 tx_vsi_bytes; + u64 rx_vsi_multicast_packets; + u64 tx_vsi_multicast_packets; + u64 rx_vsi_broadcast_packets; + u64 tx_vsi_broadcast_packets; + + u64 rx_sw_unicast_packets; + u64 rx_sw_broadcast_packets; + u64 rx_sw_multicast_packets; + u64 rx_sw_drop_packets; + u64 rx_sw_drop_bytes; +}; + +struct sxe2_vsi_stats { + struct sxe2_stats vsi_sw_stats; + struct sxe2_stats vsi_sw_stats_prev; + struct sxe2_stats vsi_hw_stats; + struct sxe2_stats stats; +}; + +struct sxe2_vsi { + TAILQ_ENTRY(sxe2_vsi) next; + struct sxe2_adapter *adapter; + u16 vsi_id; + u16 vsi_type; + struct sxe2_vsi_irqs irqs; + struct sxe2_queue_info txqs; + struct sxe2_queue_info rxqs; + u16 budget; + struct sxe2_vsi_stats vsi_stats; +}; + +TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi); + +struct sxe2_vsi_context { + u16 func_id; + u16 dpdk_vsi_id; + u16 kernel_vsi_id; + u16 vsi_type; + + u16 bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + u16 bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT]; + + struct sxe2_vsi *main_vsi; +}; + +void 
sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter, + struct sxe2_drv_vsi_caps *vsi_caps); + +s32 sxe2_vsi_init(struct rte_eth_dev *dev); + +void sxe2_vsi_uninit(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v13 06/10] drivers: support PCI BAR mapping 2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (4 preceding siblings ...) 2026-05-12 11:36 ` [PATCH v13 05/10] drivers: add base driver probe skeleton liujie5 @ 2026-05-12 11:36 ` liujie5 2026-05-12 11:36 ` [PATCH v13 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 ` (4 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement PCI BAR (Base Address Register) mapping and unmapping logic to enable MMIO (Memory Mapped I/O) access to hardware registers. The driver retrieves the BAR0 virtual address from the PCI resource during the probing phase. This mapping is used for subsequent register-level operations. Proper cleanup is implemented in the device close path. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_ioctl_chnl.c | 40 +++- drivers/net/sxe2/sxe2_ethdev.c | 307 ++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_ethdev.h | 18 ++ 3 files changed, 362 insertions(+), 3 deletions(-) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 4b041765de..80fccc6a11 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -160,6 +160,40 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev) return ret; } +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap) +void +*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, u8 bar_idx, u64 len, u64 offset) +{ + s32 cmd_fd = 0; + void *virt = NULL; + + if (cdev->config.kernel_reset) { + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_err; + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_err; + } + + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, 
src=0x%"PRIx64", offset=0x%"PRIx64"", + cmd_fd, bar_idx, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + + virt = mmap(NULL, len, PROT_READ | PROT_WRITE, + MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); + if (virt == MAP_FAILED) { + PMD_LOG_ERR(COM, "Failed to mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + cmd_fd, len, offset, strerror(errno)); + goto l_err; + } + + return virt; +l_err: + return NULL; +} + RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap) s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) @@ -167,8 +201,8 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) s32 ret = SXE2_SUCCESS; if (cdev->config.kernel_reset) { - ret = SXE2_ERR_PERM; - PMD_LOG_WARN(COM, "kernel reset, need restart app."); + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel was reset, need to restart app."); goto l_end; } @@ -179,7 +213,7 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) if (ret < 0) { PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", virt, len, strerror(errno)); - ret = SXE2_ERR_IO; + ret = -EIO; goto l_end; } diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index a6cb51789e..4836c338bc 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -54,6 +54,21 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { { .vendor_id = 0, }, }; +static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { + /* SXE2_PCI_MAP_RES_INVALID */ + {0, 0, 0}, + /* SXE2_PCI_MAP_RES_DOORBELL_TX */ + { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ + { SXE2_RXQ_TAIL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_DYN */ + { SXE2_VF_DYN_CTL(0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ + { SXE2_VF_INT_ITR(0, 0), 0, 4}, + /* SXE2_PCI_MAP_RES_IRQ_MSIX */ + { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, +}; + static s32 sxe2_dev_configure(struct rte_eth_dev *dev) { s32 ret = 
SXE2_SUCCESS; @@ -151,6 +166,7 @@ static s32 sxe2_dev_close(struct rte_eth_dev *dev) (void)sxe2_dev_stop(dev); sxe2_vsi_uninit(dev); + sxe2_dev_pci_map_uinit(dev); return SXE2_SUCCESS; } @@ -287,6 +303,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_infos_get = sxe2_dev_infos_get, }; +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 bar_idx = SXE2_PCI_MAP_BAR_INVALID; + u8 i; + + bar_idx = map_ctxt->addr_info[res_type].bar_idx; + if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + goto l_end; + } + + for (i = 0; i < map_ctxt->bar_cnt; i++) { + if (bar_idx == map_ctxt->bar_info[i].bar_idx) { + bar_info = &map_ctxt->bar_info[i]; + break; + } + } + +l_end: + return bar_info; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { @@ -354,6 +395,67 @@ static s32 sxe2_dev_caps_get(struct sxe2_adapter *adapter) return ret; } +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + void *map_addr = NULL; + s32 ret = SXE2_SUCCESS; + size_t page_size = 0; + size_t aligned_len = 0; + size_t page_inner_offset = 0; + off_t aligned_offset = 0; + u8 i = 0; + + if (org_len == 0) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0"); + ret = -EFAULT; + goto l_end; + } + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type); + ret = -EFAULT; + goto l_end; + } + seg_info = bar_info->seg_info; + + page_size = rte_mem_page_size(); + + aligned_offset = 
RTE_ALIGN_FLOOR(org_offset, page_size); + page_inner_offset = org_offset - aligned_offset; + aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size); + + map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); + if (!map_addr) { + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", + res_type, org_len, page_size); + ret = -EFAULT; + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID) + continue; + seg_info[i].type = res_type; + seg_info[i].addr = map_addr; + seg_info[i].page_inner_offset = page_inner_offset; + seg_info[i].len = aligned_len; + break; + } + if (i == bar_info->map_cnt) { + PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type); + ret = -ENOMEM; + sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len); + goto l_end; + } + +l_end: + return ret; +} + static s32 sxe2_hw_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); @@ -368,6 +470,54 @@ static s32 sxe2_hw_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base) +{ + struct sxe2_pci_map_addr_info *addr_info = NULL; + s32 ret = SXE2_SUCCESS; + + addr_info = &adapter->map_ctxt.addr_info[res_type]; + if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) { + PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type); + ret = -EFAULT; + goto l_end; + } + + ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width, + addr_info->addr_base + item_base * addr_info->reg_width); + if (ret != SXE2_SUCCESS) { + PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type); + goto l_end; + } +l_end: + return ret; +} + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type) +{ + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct 
sxe2_pci_map_segment_info *seg_info = NULL; + u32 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + for (i = 0; i < bar_info->map_cnt; i++) { + if (res_type == seg_info[i].type) { + (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr, seg_info[i].len); + memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info)); + break; + } + } + +l_end: + return; +} + static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = @@ -408,6 +558,157 @@ static s32 sxe2_dev_info_init(struct rte_eth_dev *dev) return ret; } +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + struct sxe2_pci_map_segment_info *seg_info = NULL; + u16 txq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 txq_base = adapter->q_ctxt.base_idx_in_pf; + u16 rxq_cnt = adapter->q_ctxt.qp_cnt_assign; + u16 irq_cnt = adapter->irq_ctxt.max_cnt_hw; + u16 irq_base = adapter->irq_ctxt.base_idx_in_func; + u16 rxq_base = adapter->q_ctxt.base_idx_in_pf; + s32 ret = SXE2_SUCCESS; + + PMD_INIT_FUNC_TRACE(); + + adapter->dev_info.dev_data = dev->data; + + if (!pci_dev->mem_resource[0].phys_addr) { + PMD_LOG_ERR(INIT, "Physical address not scanned"); + ret = -ENXIO; + goto l_end; + } + + map_ctxt->bar_cnt = 2; + + bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0); + if (!bar_info) { + PMD_LOG_ERR(INIT, "Failed to alloc bar_info"); + ret = -ENOMEM; + goto l_end; + } + bar_info[0].bar_idx = 0; + bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0); + if (!seg_info) { + 
PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = -ENOMEM; + goto l_free_bar; + } + + bar_info[0].seg_info = seg_info; + + bar_info[1].bar_idx = 4; + bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT; + seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0); + if (!seg_info) { + PMD_LOG_ERR(INIT, "Failed to alloc seg_info"); + ret = -ENOMEM; + goto l_free_seg0; + } + + bar_info[1].seg_info = seg_info; + map_ctxt->bar_info = bar_info; + + map_ctxt->addr_info = sxe2_net_map_addr_info_pf; + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, + txq_cnt, txq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret); + goto l_free_seg1; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, + rxq_cnt, rxq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret); + goto l_free_txq; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret); + goto l_free_rxq_tail; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret); + goto l_free_irq_dyn; + } + + ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX, + irq_cnt, irq_base); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret); + goto l_free_irq_itr; + } + goto l_end; + +l_free_irq_itr: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); +l_free_irq_dyn: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); +l_free_rxq_tail: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); +l_free_txq: + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); +l_free_seg1: + if (bar_info[1].seg_info) { + rte_free(bar_info[1].seg_info); + bar_info[1].seg_info = NULL; + } 
+l_free_seg0: + if (bar_info[0].seg_info) { + rte_free(bar_info[0].seg_info); + bar_info[0].seg_info = NULL; + } +l_free_bar: + if (bar_info) { + rte_free(bar_info); + bar_info = NULL; + } +l_end: + return ret; +} + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_bar_info *bar_info = NULL; + u8 i = 0; + + PMD_INIT_FUNC_TRACE(); + + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR); + (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX); + + if (map_ctxt != NULL && map_ctxt->bar_info != NULL) { + for (i = 0; i < map_ctxt->bar_cnt; i++) { + bar_info = &map_ctxt->bar_info[i]; + if (bar_info != NULL && bar_info->seg_info != NULL) { + rte_free(bar_info->seg_info); + bar_info->seg_info = NULL; + } + } + rte_free(map_ctxt->bar_info); + map_ctxt->bar_info = NULL; + } + + adapter->dev_info.dev_data = NULL; +} + static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *kvargs __rte_unused) { s32 ret = 0; @@ -425,6 +726,12 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, struct sxe2_dev_kvargs_info *k goto l_end; } + ret = sxe2_dev_pci_map_init(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret); + goto l_end; + } + ret = sxe2_vsi_init(dev); if (ret) { PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret); diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index 412f5d2b14..698e2ee4a2 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -292,4 +292,22 @@ struct sxe2_adapter { #define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \ ((struct sxe2_adapter *)(dev)->data->dev_private) 
+#define SXE2_DEV_TO_PCI(eth_dev) \ + RTE_DEV_TO_PCI((eth_dev)->device) + +struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type); + +s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u64 org_len, u64 org_offset); + +s32 sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, u32 res_type, + u32 item_cnt, u32 item_base); + +void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, u32 res_type); + +s32 sxe2_dev_pci_map_init(struct rte_eth_dev *dev); + +void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev); + #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
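The BAR-mapping patch above widens each requested register window to page boundaries before calling mmap(), since mmap() requires a page-aligned file offset. Below is a minimal standalone sketch of that alignment arithmetic; the struct and function names are illustrative, not part of the driver:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the alignment done in sxe2_dev_pci_seg_map(): the window
 * [org_offset, org_offset + org_len) is widened to page boundaries, and
 * page_inner_offset records how far into the mapping the first register
 * actually lives (mapped base + page_inner_offset = first register). */
struct pci_window {
	uint64_t aligned_offset;    /* page-aligned offset passed to mmap() */
	size_t aligned_len;         /* page-multiple length passed to mmap() */
	size_t page_inner_offset;   /* delta from mapped base to the register */
};

static struct pci_window
pci_window_align(uint64_t org_offset, size_t org_len, size_t page_size)
{
	struct pci_window w;

	/* round the offset down to a page boundary (RTE_ALIGN_FLOOR) */
	w.aligned_offset = org_offset & ~((uint64_t)page_size - 1);
	w.page_inner_offset = (size_t)(org_offset - w.aligned_offset);
	/* round the widened length up to a page multiple (RTE_ALIGN) */
	w.aligned_len = (w.page_inner_offset + org_len + page_size - 1) &
			~(page_size - 1);
	return w;
}
```

With a 4 KiB page, for example, a 4-byte doorbell at offset 0x1234 maps as offset 0x1000 and length 0x1000, with a page-inner offset of 0x234.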
* [PATCH v13 07/10] common/sxe2: add ioctl interface for DMA map and unmap 2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (5 preceding siblings ...) 2026-05-12 11:36 ` [PATCH v13 06/10] drivers: support PCI BAR mapping liujie5 @ 2026-05-12 11:36 ` liujie5 2026-05-12 11:36 ` [PATCH v13 08/10] net/sxe2: support queue setup and control liujie5 ` (3 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement DMA mapping and unmapping functionality using ioctl calls. This allows the driver to configure the hardware's IOMMU/DMA tables, ensuring the device can safely access memory buffers allocated by userspace. The mapping is established during device initialization or queue setup and is revoked during device closure to prevent memory leaks and ensure hardware security. Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 50 +++++++++- drivers/common/sxe2/sxe2_ioctl_chnl.c | 106 ++++++++++++++++++++- drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++ 3 files changed, 163 insertions(+), 2 deletions(-) diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 62bdc93b5c..63873afe4a 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -443,7 +443,7 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) cdev = sxe2_rtedev_to_cdev(&pci_dev->device); if (cdev == NULL) { ret = -ENODEV; - PMD_LOG_ERR(COM, "Fail to get remove device."); + PMD_LOG_ERR(COM, "Fail to get device when remove."); goto l_end; } @@ -467,12 +467,60 @@ static s32 sxe2_common_pci_remove(struct rte_pci_device *pci_dev) return ret; } +static s32 sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev, + void *addr, u64 iova, size_t len) +{ + struct sxe2_common_device *cdev; + s32 ret = 
SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = -ENODEV; + PMD_LOG_ERR(COM, "Fail to get device when dma map."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_map(cdev, (u64)(uintptr_t)addr, iova, len); + if (ret) { + PMD_LOG_ERR(COM, "Fail to do dma map, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + +static s32 sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev, + void *addr __rte_unused, u64 iova, size_t len __rte_unused) +{ + struct sxe2_common_device *cdev; + s32 ret = SXE2_ERROR; + + cdev = sxe2_rtedev_to_cdev(&pci_dev->device); + if (cdev == NULL) { + ret = -ENODEV; + PMD_LOG_ERR(COM, "Fail to get device when dma unmap."); + goto l_end; + } + + ret = sxe2_drv_dev_dma_unmap(cdev, iova); + if (ret) { + PMD_LOG_ERR(COM, "Fail to do dma unmap, ret=%d", ret); + goto l_end; + } + +l_end: + return ret; +} + static struct rte_pci_driver sxe2_common_pci_driver = { .driver = { .name = SXE2_COMMON_PCI_DRIVER_NAME, }, .probe = sxe2_common_pci_probe, .remove = sxe2_common_pci_remove, + .dma_map = sxe2_common_pci_dma_map, + .dma_unmap = sxe2_common_pci_dma_unmap, }; static u32 sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table) diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 80fccc6a11..4dfc4fd0fa 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -202,7 +202,7 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) if (cdev->config.kernel_reset) { ret = -EPERM; - PMD_LOG_WARN(COM, "kernel reseted, need restart app."); + PMD_LOG_WARN(COM, "kernel reset, need restart app."); goto l_end; } @@ -220,3 +220,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) l_end: return ret; } + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map) +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size) +{ + struct 
sxe2_ioctl_iommu_dma_map cmd_params; + enum rte_iova_mode iova_mode; + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + + if (cdev->config.kernel_reset) { + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + iova_mode = rte_eal_iova_mode(); + if (iova_mode == RTE_IOVA_PA) { + if (cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "PA mode is not supported with iommu"); + ret = -EIO; + } + goto l_end; + } else if (iova_mode == RTE_IOVA_VA) { + if (!cdev->config.support_iommu) { + PMD_LOG_ERR(COM, "VA mode is not supported without iommu, please use PA mode."); + ret = -EIO; + goto l_end; + } + } + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map)); + cmd_params.vaddr = vaddr; + cmd_params.iova = iova; + cmd_params.size = size; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params); + if (ret < 0) { + PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = -EIO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + +RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap) +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova) +{ + s32 ret = SXE2_SUCCESS; + s32 cmd_fd = 0; + struct sxe2_ioctl_iommu_dma_unmap cmd_params; + + if (cdev->config.kernel_reset) { + ret = -EPERM; + PMD_LOG_WARN(COM, "kernel reset, need restart app."); + goto l_end; + } + + if (!cdev->config.support_iommu) + goto l_end; + + cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev); + if (cmd_fd < 0) { + ret = -EBADF; + PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd); + goto l_end; + } + + PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"", + cmd_fd, iova); + + memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap)); + cmd_params.iova = 
iova; + + rte_ticketlock_lock(&cdev->config.lock); + ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params); + if (ret < 0) { + PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s", + cmd_fd, ret, strerror(errno)); + ret = -EIO; + rte_ticketlock_unlock(&cdev->config.lock); + goto l_end; + } + rte_ticketlock_unlock(&cdev->config.lock); + +l_end: + return ret; +} + diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h index 376c5e3ac7..e8f983e40e 100644 --- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h +++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h @@ -47,6 +47,15 @@ __rte_internal s32 sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len); +__rte_internal +s32 +sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, u64 vaddr, + u64 iova, u64 size); + +__rte_internal +s32 +sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, u64 iova); + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
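The DMA-map patch above gates the ioctl on the EAL IOVA mode: in PA mode the kernel channel is never invoked (and an IOMMU-backed channel rejects PA outright), while VA mode requires the IOMMU channel. A minimal standalone sketch of that gate follows; the enum and function names are illustrative stand-ins for rte_eal_iova_mode() and the cdev->config.support_iommu flag:

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for the rte_iova_mode values consulted in
 * sxe2_drv_dev_dma_map(). */
enum iova_mode { IOVA_MODE_PA, IOVA_MODE_VA };

/* Returns 0 or a negative errno, and reports via *issue_ioctl whether the
 * DMA-map ioctl should actually be issued (it never is in PA mode). */
static int dma_map_mode_check(enum iova_mode mode, bool support_iommu,
			      bool *issue_ioctl)
{
	*issue_ioctl = false;

	if (mode == IOVA_MODE_PA)
		/* PA mode skips the map; with an IOMMU it is an error */
		return support_iommu ? -EIO : 0;

	if (!support_iommu)
		/* VA mode is only usable through the IOMMU channel */
		return -EIO;

	*issue_ioctl = true;
	return 0;
}
```

Separating the mode check from the ioctl call like this makes the four mode/IOMMU combinations easy to verify in isolation.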
* [PATCH v13 08/10] net/sxe2: support queue setup and control 2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (6 preceding siblings ...) 2026-05-12 11:36 ` [PATCH v13 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5 @ 2026-05-12 11:36 ` liujie5 2026-05-12 11:36 ` [PATCH v13 09/10] drivers: add data path for Rx and Tx liujie5 ` (2 subsequent siblings) 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Add support for Rx and Tx queue setup, release, and management. Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup, rx_queue_release, and tx_queue_release. This includes: - Allocating memory for hardware ring descriptors. - Initializing software ring structures and hardware head/tail pointers. - Implementing proper resource cleanup logic to prevent memory leaks during queue reconfiguration or device close. 
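As a minimal illustration of the descriptor-count validation this patch performs in sxe2_rx_queue_setup() before allocating ring memory (a sketch only; the three limit constants below are assumptions standing in for the driver's SXE2_RX_DESC_RING_ALIGN / SXE2_MIN_RING_DESC / SXE2_MAX_RING_DESC values):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed limits, not the driver's actual values. */
#define RX_DESC_RING_ALIGN 8u
#define MIN_RING_DESC 64u
#define MAX_RING_DESC 4096u

/* nb_desc must be a multiple of the descriptor alignment and lie within
 * the hardware ring-depth limits, mirroring the check in
 * sxe2_rx_queue_setup(). */
static bool rx_ring_depth_valid(uint16_t nb_desc)
{
	return (nb_desc % RX_DESC_RING_ALIGN) == 0 &&
	       nb_desc >= MIN_RING_DESC &&
	       nb_desc <= MAX_RING_DESC;
}
```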
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_drv_cmd.h | 9 - drivers/net/sxe2/sxe2_ethdev.c | 66 +++- drivers/net/sxe2/sxe2_ethdev.h | 3 + drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++++++++++++++++++++++ drivers/net/sxe2/sxe2_rx.h | 34 ++ drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++++++++++++++++ drivers/net/sxe2/sxe2_tx.h | 32 ++ 8 files changed, 1145 insertions(+), 27 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_rx.c create mode 100644 drivers/net/sxe2/sxe2_rx.h create mode 100644 drivers/net/sxe2/sxe2_tx.c create mode 100644 drivers/net/sxe2/sxe2_tx.h diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 6c9a86423a..8638244d80 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -16,6 +16,8 @@ sources += files( 'sxe2_cmd_chnl.c', 'sxe2_vsi.c', 'sxe2_queue.c', + 'sxe2_tx.c', + 'sxe2_rx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h index 4094442077..f236e30c40 100644 --- a/drivers/net/sxe2/sxe2_drv_cmd.h +++ b/drivers/net/sxe2/sxe2_drv_cmd.h @@ -5,17 +5,8 @@ #ifndef __SXE2_DRV_CMD_H__ #define __SXE2_DRV_CMD_H__ -#ifdef SXE2_DPDK_DRIVER #include "sxe2_type.h" #define SXE2_DPDK_RESOURCE_INSUFFICIENT -#endif - -#ifdef SXE2_LINUX_DRIVER -#ifdef __KERNEL__ -#include <linux/types.h> -#include <linux/if_ether.h> -#endif -#endif #define SXE2_DRV_CMD_MODULE_S (16) #define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF)) diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 4836c338bc..2a07c211bf 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -24,6 +24,8 @@ #include "sxe2_ethdev.h" #include "sxe2_drv_cmd.h" #include "sxe2_cmd_chnl.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -80,14 +82,6 
@@ static s32 sxe2_dev_configure(struct rte_eth_dev *dev) return ret; } -static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - -static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused) -{ -} - static s32 sxe2_dev_stop(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -106,16 +100,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - -static s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused) -{ - return 0; -} - static s32 sxe2_queues_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -301,6 +285,14 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_stop = sxe2_dev_stop, .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + + .rx_queue_setup = sxe2_rx_queue_setup, + .tx_queue_setup = sxe2_tx_queue_setup, + .rx_queue_release = sxe2_rx_queue_release, + .tx_queue_release = sxe2_tx_queue_release, + + .rxq_info_get = sxe2_rx_queue_info_get, + .txq_info_get = sxe2_tx_queue_info_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, @@ -328,6 +320,44 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter return bar_info; } +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func) +{ + struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt; + struct sxe2_pci_map_segment_info *seg_info = NULL; + struct sxe2_pci_map_bar_info *bar_info = NULL; + void __iomem *addr = NULL; + u8 reg_width = 0; + u8 i = 0; + + bar_info = sxe2_dev_get_bar_info(adapter, res_type); + if (bar_info == NULL) { + PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", + res_type); + goto l_end; + } + seg_info = bar_info->seg_info; + + reg_width = map_ctxt->addr_info[res_type].reg_width; + if (reg_width == 0) { + 
PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d", + res_type); + goto l_end; + } + + for (i = 0; i < bar_info->map_cnt; i++) { + seg_info = &bar_info->seg_info[i]; + if (res_type == seg_info->type) { + addr = (void __iomem *)((uintptr_t)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func); + goto l_end; + } + } + +l_end: + return addr; +} + static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps) { diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index 698e2ee4a2..4ef7854479 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -295,6 +295,9 @@ struct sxe2_adapter { #define SXE2_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) +void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, + enum sxe2_pci_map_resource res_type, u16 idx_in_func); + struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, enum sxe2_pci_map_resource res_type); diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c new file mode 100644 index 0000000000..6b42297382 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.c @@ -0,0 +1,579 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> + +#include "sxe2_ethdev.h" +#include "sxe2_queue.h" +#include "sxe2_rx.h" +#include "sxe2_cmd_chnl.h" + +#include "sxe2_osal.h" +#include "sxe2_common_log.h" + +static void __iomem *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL, queue_id); +} + +static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq) +{ + rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0); +} + +static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq) +{ + u16 i = 0; + u16 len = 0; + static const union sxe2_rx_desc zeroed_desc = {{0}}; + + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + for (i = 0; i < len; ++i) + rxq->desc_ring[i] = zeroed_desc; + + memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf)); + for (i = rxq->ring_depth; i < len; i++) + rxq->buffer_ring[i] = &rxq->fake_mbuf; + + rxq->hold_num = 0; + rxq->next_ret_pkt = 0; + rxq->processing_idx = 0; + rxq->completed_pkts_num = 0; + rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1; + + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; + + rxq->realloc_num = 0; + rxq->realloc_start = 0; +} + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq) +{ + u16 i; + + if (rxq->buffer_ring != NULL) { + for (i = 0; i < rxq->ring_depth; i++) { + if (rxq->buffer_ring[i] != NULL) { + rte_pktmbuf_free(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + + if (rxq->completed_pkts_num) { + for (i = 0; i < rxq->completed_pkts_num; ++i) { + if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) { + rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]); + rxq->completed_buf[rxq->next_ret_pkt + i] = NULL; + } + } + rxq->completed_pkts_num = 0; + } +} + +const 
struct sxe2_rxq_ops sxe2_default_rxq_ops = { + .queue_reset = sxe2_rx_queue_reset, + .mbufs_release = sxe2_rx_queue_mbufs_release, +}; + +static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void) +{ + return sxe2_default_rxq_ops; +} + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, + u16 queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sxe2_rx_queue *rxq = NULL; + + if (queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u", + queue_id, dev->data->nb_rx_queues); + goto end; + } + + rxq = dev->data->rx_queues[queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->mp = rxq->mb_pool; + qinfo->nb_desc = rxq->ring_depth; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_rx_queue *rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = -EINVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + ret = sxe2_drv_rxq_switch(adapter, rxq, false); + if (ret) { + PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d", + rx_queue_id, ret); + if (ret == -EPERM) + goto l_free; + goto l_end; + } + +l_free: + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; +l_end: + return ret; +} + +static void __rte_cold 
sxe2_rx_queue_free(struct sxe2_rx_queue *rxq) +{ + if (rxq != NULL) { + rxq->ops.mbufs_release(rxq); + if (rxq->buffer_ring != NULL) { + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + } + rte_memzone_free(rxq->mz); + rte_free(rxq); + } +} + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, + u16 queue_idx) +{ + (void)sxe2_rx_queue_stop(dev, queue_idx); + sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_rxq; + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + if (data->rx_queues[nb_rxq] == NULL) + continue; + sxe2_rx_queue_release(dev, nb_rxq); + data->rx_queues[nb_rxq] = NULL; + } + data->nb_rx_queues = 0; +} + +static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_rx_queue *rxq; + const struct rte_memzone *tz; + u16 len; + + if (dev->data->rx_queues[queue_idx] != NULL) { + sxe2_rx_queue_release(dev, queue_idx); + dev->data->rx_queues[queue_idx] = NULL; + } + + rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq), + RTE_CACHE_LINE_SIZE, socket_id); + + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx); + goto l_end; + } + + rxq->ring_depth = ring_depth; + len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM; + + rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring", + sizeof(struct rte_mbuf *) * len, + RTE_CACHE_LINE_SIZE, socket_id); + + if (!rxq->buffer_ring) { + PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed"); + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx, + SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(RX, "Rxq malloc desc mem failed"); + rte_free(rxq->buffer_ring); + rxq->buffer_ring = NULL; + rte_free(rxq); + rxq = NULL; + goto l_end; + } + + 
rxq->mz = tz; + memset(tz->addr, 0, SXE2_RX_RING_SIZE); + rxq->base_addr = tz->iova; + rxq->desc_ring = (union sxe2_rx_desc *)tz->addr; + +l_end: + return rxq; +} + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_rx_queue *rxq; + u64 offloads; + s32 ret; + u16 rx_nseg; + u16 i; + + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + queue_idx, dev->data->nb_rx_queues); + ret = -EINVAL; + goto l_end; + } + + if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 || + nb_desc > SXE2_MAX_RING_DESC || + nb_desc < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc); + ret = -EINVAL; + goto l_end; + } + + if (mp != NULL) + rx_nseg = 1; + else + rx_nseg = rx_conf->rx_nseg; + + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + + if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = -EINVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) { + PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u", + dev->data->port_id, queue_idx, rx_nseg); + ret = -EINVAL; + goto l_end; + } + + if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) { + PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configured with Keep crc.", + dev->data->port_id, queue_idx); + ret = -EINVAL; + goto l_end; + } + + rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (rxq == NULL) { + PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx); + ret = 
-ENOMEM; + goto l_end; + } + + if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + dev->data->lro = 1; + + if (rx_nseg > 1) { + for (i = 0; i < rx_nseg; i++) { + rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + } + rxq->mb_pool = rxq->rx_seg[0].mp; + } else { + rxq->mb_pool = mp; + } + + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->port_id = dev->data->port_id; + rxq->offloads = offloads; + if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->queue_id = queue_idx; + rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx; + rxq->drop_en = rx_conf->rx_drop_en; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->vsi = vsi; + rxq->ops = sxe2_rx_default_ops_get(); + rxq->ops.queue_reset(rxq); + dev->data->rx_queues[queue_idx] = rxq; + + ret = SXE2_SUCCESS; +l_end: + return ret; +} + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp) +{ + return rte_mbuf_raw_alloc(mp); +} + +static s32 __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq) +{ + struct rte_mbuf **buf_ring = rxq->buffer_ring; + struct rte_mbuf *mbuf = NULL; + struct rte_mbuf *mbuf_pay; + volatile union sxe2_rx_desc *desc; + u64 dma_addr; + s32 ret; + u16 i, j; + + for (i = 0; i < rxq->ring_depth; i++) { + mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool); + if (mbuf == NULL) { + PMD_LOG_ERR(RX, "Failed to allocate mbuf for Rx queue"); + ret = -ENOMEM; + goto l_err_free_mbuf; + } + + buf_ring[i] = mbuf; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + desc = &rxq->desc_ring[i]; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + mbuf->next = NULL; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = dma_addr; + } else { + mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX"); 
+ ret = -ENOMEM; + goto l_err_free_mbuf; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + desc->read.hdr_addr = dma_addr; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + desc->read.rsvd1 = 0; + desc->read.rsvd2 = 0; +#endif + } + + ret = SXE2_SUCCESS; + goto l_end; + +l_err_free_mbuf: + for (j = 0; j <= i; j++) { + if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) { + rte_pktmbuf_free(buf_ring[j]->next); + buf_ring[j]->next = NULL; + } + + if (buf_ring[j] != NULL) { + rte_pktmbuf_free(buf_ring[j]); + buf_ring[j] = NULL; + } + } + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id) +{ + struct sxe2_rx_queue *rxq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_LOG_ERR(RX, "Rx queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + ret = -EINVAL; + goto l_end; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (rxq == NULL) { + PMD_LOG_ERR(RX, "Rx queue %u is not available or setup", + rx_queue_id); + ret = -EINVAL; + goto l_end; + } + + if (dev->data->rx_queue_state[rx_queue_id] == + RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_rx_queue_mbufs_alloc(rxq); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail", + rx_queue_id); + ret = -ENOMEM; + goto l_end; + } + + sxe2_rx_head_tail_init(adapter, rxq); + + ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1); + if (ret) { + PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d", + rx_queue_id, ret); + + (void)sxe2_drv_rxq_switch(adapter, rxq, false); + rxq->ops.mbufs_release(rxq); + rxq->ops.queue_reset(rxq); + goto l_end; + } + + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1); + 
dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_rx_queue *rxq; + u16 nb_rxq; + u16 nb_started_rxq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (!rxq || rxq->rx_deferred_start) + continue; + + ret = sxe2_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq); + goto l_free_started_queue; + } + + rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0, + rte_memory_order_relaxed); + rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0, + rte_memory_order_relaxed); + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++) + (void)sxe2_rx_queue_stop(dev, nb_started_rxq); +l_end: + return ret; +} + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev; + struct sxe2_rx_queue *rxq = NULL; + s32 ret; + u16 nb_rxq; + PMD_INIT_FUNC_TRACE(); + + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = sxe2_rx_queue_stop(dev, nb_rxq); + if (ret) { + PMD_LOG_ERR(RX, "Fail to stop rx queue %u", nb_rxq); + continue; + 
} + + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq) { + sw_stats_prev->ipackets += + rte_atomic_load_explicit(&rxq->sw_stats.pkts, + rte_memory_order_relaxed); + sw_stats_prev->ierrors += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->ibytes += + rte_atomic_load_explicit(&rxq->sw_stats.bytes, + rte_memory_order_relaxed); + + sw_stats_prev->rx_sw_unicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_broadcast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_multicast_packets += + rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_packets += + rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts, + rte_memory_order_relaxed); + sw_stats_prev->rx_sw_drop_bytes += + rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes, + rte_memory_order_relaxed); + } + } +} diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h new file mode 100644 index 0000000000..7c6239b387 --- /dev/null +++ b/drivers/net/sxe2/sxe2_rx.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_RX_H__ +#define __SXE2_RX_H__ + +#include "sxe2_queue.h" + +s32 __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + +s32 __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, u16 rx_queue_id); + +void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq); + +void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_rxq_info *qinfo); + +s32 __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, u16 rx_queue_id); + +s32 __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev); + +struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp); + +#endif diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c new file mode 100644 index 0000000000..b043611c8d --- /dev/null +++ b/drivers/net/sxe2/sxe2_tx.c @@ -0,0 +1,447 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include "sxe2_tx.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_cmd_chnl.h" + +static void __iomem *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, u16 queue_id) +{ + return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX, queue_id); +} + +static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq) +{ + txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0); +} + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq) +{ + u16 prev, i; + volatile union sxe2_tx_data_desc *txd; + static const union sxe2_tx_data_desc zeroed_desc = {{0}}; + struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring; + + for (i = 0; i < txq->ring_depth; i++) + txq->desc_ring[i] = zeroed_desc; + + prev = txq->ring_depth - 1; + for (i = 0; i < txq->ring_depth; i++) { + txd = &txq->desc_ring[i]; + if (txd == NULL) + continue; + + txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE); + tx_buffer[i].mbuf = NULL; + tx_buffer[i].last_id = i; + tx_buffer[prev].next_id = i; + prev = i; + } + + txq->desc_used_num = 0; + txq->desc_free_num = txq->ring_depth - 1; + txq->next_use = 0; + txq->next_clean = txq->ring_depth - 1; + txq->next_dd = txq->rs_thresh - 1; + txq->next_rs = txq->rs_thresh - 1; +} + +void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq) +{ + u32 i; + + if (txq != NULL && txq->buffer_ring != NULL) { + for (i = 0; i < txq->ring_depth; i++) { + if (txq->buffer_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf); + txq->buffer_ring[i].mbuf = NULL; + } + } + } +} + +static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq) +{ + if (txq != NULL && txq->buffer_ring != NULL) + rte_free(txq->buffer_ring); +} + +const struct 
sxe2_txq_ops sxe2_default_txq_ops = { + .queue_reset = sxe2_tx_queue_reset, + .mbufs_release = sxe2_tx_queue_mbufs_release, + .buffer_ring_free = sxe2_tx_buffer_ring_free, +}; + +static struct sxe2_txq_ops sxe2_tx_default_ops_get(void) +{ + return sxe2_default_txq_ops; +} + +static s32 sxe2_txq_arg_validate(struct rte_eth_dev *dev, u16 ring_depth, + u16 *rs_thresh, u16 *free_thresh, const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + + if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 || + ring_depth > SXE2_MAX_RING_DESC || + ring_depth < SXE2_MIN_RING_DESC) { + PMD_LOG_ERR(TX, "number:%u of transmit descriptors is invalid", ring_depth); + ret = -EINVAL; + goto l_end; + } + + *free_thresh = (u16)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + *rs_thresh = (u16)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + + if (*rs_thresh >= (ring_depth - 2)) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number " + "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)", + *rs_thresh, dev->data->port_id); + ret = -EINVAL; + goto l_end; + } + + if (*free_thresh >= (ring_depth - 3)) { + PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number " + "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)", + *free_thresh, dev->data->port_id); + ret = -EINVAL; + goto l_end; + } + + if (*rs_thresh > *free_thresh) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to " + "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)", + *free_thresh, *rs_thresh, dev->data->port_id); + ret = -EINVAL; + goto l_end; + } + + if ((ring_depth % *rs_thresh) != 0) { + PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the " + "number of tx descriptors. 
(tx_rs_thresh:%u port:%d ring_depth:%u)", + *rs_thresh, dev->data->port_id, ring_depth); + ret = -EINVAL; + goto l_end; + } + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct sxe2_tx_queue *txq = NULL; + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + goto end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id); + goto end; + } + + qinfo->nb_desc = txq->ring_depth; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_free_thresh = txq->free_thresh; + qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; + +end: + return; +} + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_tx_queue *txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = -EINVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == + RTE_ETH_QUEUE_STATE_STOPPED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + ret = SXE2_SUCCESS; + goto l_end; + } + + ret = sxe2_drv_txq_switch(adapter, txq, false); + if (ret) { + PMD_LOG_ERR(TX, "Failed to switch tx queue %u off", + queue_id); + goto l_end; + } + + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static void __rte_cold sxe2_tx_queue_free(struct 
sxe2_tx_queue *txq) +{ + if (txq != NULL) { + txq->ops.mbufs_release(txq); + txq->ops.buffer_ring_free(txq); + + rte_memzone_free(txq->mz); + rte_free(txq); + } +} + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx) +{ + (void)sxe2_tx_queue_stop(dev, queue_idx); + sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; +} + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + if (data->tx_queues[nb_txq] == NULL) + continue; + + sxe2_tx_queue_release(dev, nb_txq); + data->tx_queues[nb_txq] = NULL; + } + data->nb_tx_queues = 0; +} + +static struct sxe2_tx_queue +*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, u16 queue_idx, + u16 ring_depth, u32 socket_id) +{ + struct sxe2_tx_queue *txq; + const struct rte_memzone *tz; + + if (dev->data->tx_queues[queue_idx]) { + sxe2_tx_queue_release(dev, queue_idx); + dev->data->tx_queues[queue_idx] = NULL; + } + + txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx); + goto l_end; + } + + tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx, + sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC, + SXE2_DESC_ADDR_ALIGN, socket_id); + if (tz == NULL) { + PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring", + sizeof(struct sxe2_tx_buffer) * ring_depth, + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->buffer_ring == NULL) { + PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx); + rte_memzone_free(tz); + rte_free(txq); + txq = NULL; + goto l_end; + } + + txq->mz = tz; + txq->base_addr = tz->iova; + txq->desc_ring = (volatile union sxe2_tx_data_desc *)tz->addr; + +l_end: + 
return txq; +} + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf) +{ + s32 ret = SXE2_SUCCESS; + u16 tx_rs_thresh; + u16 tx_free_thresh; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi; + u64 offloads; + PMD_INIT_FUNC_TRACE(); + + if (queue_idx >= dev->data->nb_tx_queues) { + PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_idx, dev->data->nb_tx_queues); + ret = -EINVAL; + goto end; + } + + ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx); + goto end; + } + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id); + if (txq == NULL) { + PMD_LOG_ERR(TX, "failed to alloc sxe2vf tx queue:%u resource", queue_idx); + ret = -ENOMEM; + goto end; + } + + txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1; + txq->ring_depth = nb_desc; + txq->rs_thresh = tx_rs_thresh; + txq->free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->vsi = vsi; + txq->ops = sxe2_tx_default_ops_get(); + txq->ops.queue_reset(txq); + + dev->data->tx_queues[queue_idx] = txq; + ret = SXE2_SUCCESS; + +end: + return ret; +} + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id) +{ + s32 ret = SXE2_SUCCESS; + struct sxe2_tx_queue *txq; + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + PMD_INIT_FUNC_TRACE(); + + if (queue_id >= dev->data->nb_tx_queues) { 
+ PMD_LOG_ERR(TX, "tx queue:%u is out of range:%u", + queue_id, dev->data->nb_tx_queues); + ret = -EINVAL; + goto l_end; + } + + if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) { + ret = SXE2_SUCCESS; + goto l_end; + } + + txq = dev->data->tx_queues[queue_id]; + if (txq == NULL) { + PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id); + ret = -EINVAL; + goto l_end; + } + + ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1); + if (ret) { + PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id); + + (void)sxe2_drv_txq_switch(adapter, txq, false); + txq->ops.mbufs_release(txq); + txq->ops.queue_reset(txq); + goto l_end; + } + + sxe2_tx_tail_init(adapter, txq); + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct sxe2_tx_queue *txq; + u16 nb_txq; + u16 nb_started_txq; + s32 ret; + PMD_INIT_FUNC_TRACE(); + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (!txq || txq->tx_deferred_start) + continue; + + ret = sxe2_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq); + goto l_free_started_queue; + } + } + ret = SXE2_SUCCESS; + goto l_end; + +l_free_started_queue: + for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++) + (void)sxe2_tx_queue_stop(dev, nb_started_txq); + +l_end: + return ret; +} + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + u16 nb_txq; + s32 ret; + + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = sxe2_tx_queue_stop(dev, nb_txq); + if (ret) { + PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq); + continue; + } + } +} diff --git a/drivers/net/sxe2/sxe2_tx.h b/drivers/net/sxe2/sxe2_tx.h new file mode 100644 index 0000000000..58b668e337 --- 
/dev/null +++ b/drivers/net/sxe2/sxe2_tx.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef __SXE2_TX_H__ +#define __SXE2_TX_H__ +#include "sxe2_queue.h" + +void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, u16 queue_id); + +void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq); + +s32 __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, u16 queue_id); + +s32 __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev, + u16 queue_idx, u16 nb_desc, u32 socket_id, + const struct rte_eth_txconf *tx_conf); + +void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, u16 queue_idx); + +void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev); + +void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, u16 queue_id, + struct rte_eth_txq_info *qinfo); + +s32 __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev); + +void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev); + +#endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* [PATCH v13 09/10] drivers: add data path for Rx and Tx 2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5 ` (7 preceding siblings ...) 2026-05-12 11:36 ` [PATCH v13 08/10] net/sxe2: support queue setup and control liujie5 @ 2026-05-12 11:36 ` liujie5 2026-05-12 11:36 ` [PATCH v13 10/10] net/sxe2: add vectorized " liujie5 2026-05-13 14:45 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback Stephen Hemminger 10 siblings, 0 replies; 143+ messages in thread From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw) To: stephen; +Cc: dev, Jie Liu From: Jie Liu <liujie5@linkdatatechnology.com> Implement receive and transmit burst functions for sxe2 PMD. Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path interfaces. The implementation includes: - Efficient descriptor fetching and mbuf allocation for Rx. - Descriptor setup and checksum offload handling for Tx. - Buffer recycling and hardware tail pointer updates. - Performance-oriented loop unrolling and prefetching where applicable. 
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/common/sxe2/sxe2_common.c | 1 + drivers/common/sxe2/sxe2_common_log.h | 1 - drivers/common/sxe2/sxe2_errno.h | 3 - drivers/common/sxe2/sxe2_ioctl_chnl.c | 8 +- drivers/common/sxe2/sxe2_osal.h | 2 - drivers/net/sxe2/meson.build | 2 + drivers/net/sxe2/sxe2_ethdev.c | 22 +- drivers/net/sxe2/sxe2_txrx.c | 247 +++++++ drivers/net/sxe2/sxe2_txrx.h | 21 + drivers/net/sxe2/sxe2_txrx_poll.c | 945 ++++++++++++++++++++++++++ 10 files changed, 1237 insertions(+), 15 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx.c create mode 100644 drivers/net/sxe2/sxe2_txrx.h create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c index 63873afe4a..bfb54eec49 100644 --- a/drivers/common/sxe2/sxe2_common.c +++ b/drivers/common/sxe2/sxe2_common.c @@ -664,6 +664,7 @@ sxe2_common_init(void) goto l_end; pthread_mutex_init(&sxe2_common_devices_list_lock, NULL); + sxe2_common_pci_init(); sxe2_commoin_inited = true; diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h index a7d2157610..cbb53263b5 100644 --- a/drivers/common/sxe2/sxe2_common_log.h +++ b/drivers/common/sxe2/sxe2_common_log.h @@ -81,4 +81,3 @@ extern s32 sxe2_log_hw; #define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>") #endif /* __SXE2_COMMON_LOG_H__ */ - diff --git a/drivers/common/sxe2/sxe2_errno.h b/drivers/common/sxe2/sxe2_errno.h index 89a715eaef..1257319edf 100644 --- a/drivers/common/sxe2/sxe2_errno.h +++ b/drivers/common/sxe2/sxe2_errno.h @@ -50,9 +50,6 @@ enum sxe2_status { SXE2_ERR_NOLCK = -ENOLCK, SXE2_ERR_NOSYS = -ENOSYS, SXE2_ERR_NOTEMPTY = -ENOTEMPTY, - SXE2_ERR_ILSEQ = -EILSEQ, - SXE2_ERR_NODATA = -ENODATA, - SXE2_ERR_CANCELED = -ECANCELED, SXE2_ERR_TIMEDOUT = -ETIMEDOUT, SXE2_ERROR = -150, diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c index 4dfc4fd0fa..b9224cf197 100644 --- 
a/drivers/common/sxe2/sxe2_ioctl_chnl.c +++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c @@ -178,13 +178,13 @@ void goto l_err; } - PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"", + PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"", bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); virt = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset)); if (virt == MAP_FAILED) { - PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s", + PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s", cmd_fd, len, offset, strerror(errno)); goto l_err; } @@ -206,12 +206,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, u64 len) goto l_end; } - PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx", + PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"", virt, len); ret = munmap(virt, len); if (ret < 0) { - PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s", + PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s", virt, len, strerror(errno)); ret = -EIO; goto l_end; diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h index d77057e7ee..20d1accd5f 100644 --- a/drivers/common/sxe2/sxe2_osal.h +++ b/drivers/common/sxe2/sxe2_osal.h @@ -29,8 +29,6 @@ #define BIT_ULL(a) (1ULL << (a)) #endif -#define MIN(a, b) ((a) < (b) ? 
(a) : (b)) - #define BITS_PER_BYTE 8 #define IS_UNICAST_ETHER_ADDR(addr) \ diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index 8638244d80..b348dd71a1 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -18,6 +18,8 @@ sources += files( 'sxe2_queue.c', 'sxe2_tx.c', 'sxe2_rx.c', + 'sxe2_txrx_poll.c', + 'sxe2_txrx.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 2a07c211bf..7e9a842eb9 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -26,6 +26,7 @@ #include "sxe2_cmd_chnl.h" #include "sxe2_tx.h" #include "sxe2_rx.h" +#include "sxe2_txrx.h" #include "sxe2_common.h" #include "sxe2_common_log.h" #include "sxe2_host_regs.h" @@ -131,6 +132,9 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) goto l_end; } + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + ret = sxe2_queues_start(dev); if (ret) { PMD_LOG_ERR(INIT, "enable queues failed"); @@ -348,8 +352,8 @@ void __iomem *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter, for (i = 0; i < bar_info->map_cnt; i++) { seg_info = &bar_info->seg_info[i]; if (res_type == seg_info->type) { - addr = (void __iomem *)((uintptr_t)seg_info->addr + - seg_info->page_inner_offset + reg_width * idx_in_func); + addr = (uint8_t __iomem *)seg_info->addr + + seg_info->page_inner_offset + reg_width * idx_in_func; goto l_end; } } @@ -460,8 +464,9 @@ s32 sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter, map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx, aligned_len, aligned_offset); if (!map_addr) { - PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%zu, page_size=%zu", - res_type, org_len, page_size); + PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64 + ", offset=%" PRIu64 ", page_size=%zu", + res_type, org_len, org_offset, page_size); ret = -EFAULT; goto l_end; } @@ -745,10 +750,17 @@ static s32 sxe2_dev_init(struct rte_eth_dev *dev, 
struct sxe2_dev_kvargs_info *k PMD_INIT_FUNC_TRACE(); + sxe2_set_common_function(dev); + dev->dev_ops = &sxe2_eth_dev_ops; - if (rte_eal_process_type() != RTE_PROC_PRIMARY) + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + sxe2_rx_mode_func_set(dev); + sxe2_tx_mode_func_set(dev); + if (ret != SXE2_SUCCESS) + PMD_LOG_ERR(INIT, "Failed to mp init (secondary), ret=%d", ret); goto l_end; + } ret = sxe2_hw_init(dev); if (ret) { diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c new file mode 100644 index 0000000000..a7b94e8967 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -0,0 +1,247 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_txrx.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_ethdev.h" + +#include "sxe2_common_log.h" +#include "sxe2_errno.h" +#include "sxe2_osal.h" +#include "sxe2_cmd_chnl.h" +#if defined(RTE_ARCH_ARM64) +#include <rte_cpuflags.h> +#endif + +static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + s32 ret; + u16 desc_idx; + + if (unlikely(offset >= txq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + desc_idx = txq->next_use + offset; + desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); + if (desc_idx >= txq->ring_depth) { + desc_idx -= txq->ring_depth; + if (desc_idx >= txq->ring_depth) + desc_idx -= txq->ring_depth; + } + + if (desc_idx == 0) + desc_idx = txq->rs_thresh - 1; + else + desc_idx -= 1; + + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == + (txq->desc_ring[desc_idx].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) + ret = RTE_ETH_TX_DESC_DONE; + else + ret = RTE_ETH_TX_DESC_FULL; + +l_end: + 
return ret; +} + +static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) +{ + struct rte_mbuf *m_seg = mbuf; + + while (m_seg != NULL) { + if (m_seg->data_len == 0) + return SXE2_ERR_INVAL; + m_seg = m_seg->next; + } + + return SXE2_SUCCESS; +} + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct rte_mbuf *mbuf; + u64 ol_flags = 0; + s32 ret = SXE2_SUCCESS; + s32 i = 0; + + for (i = 0; i < nb_pkts; i++) { + mbuf = tx_pkts[i]; + if (!mbuf) + continue; + ol_flags = mbuf->ol_flags; + if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) { + if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX || + mbuf->pkt_len > SXE2_FRAME_SIZE_MAX) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) || + (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) || + (mbuf->nb_segs > txq->ring_depth) || + (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + + if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { + rte_errno = -SXE2_ERR_INVAL; + goto l_end; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } +#endif + ret = rte_net_intel_cksum_prepare(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + + ret = sxe2_tx_mbuf_empty_check(mbuf); + if (ret != SXE2_SUCCESS) { + rte_errno = -ret; + goto l_end; + } + } + +l_end: + return i; +} + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 tx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + adapter->q_ctxt.tx_mode_flags = tx_mode_flags; + PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", + tx_mode_flags, dev->data->port_id); +} + +static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) +{ + struct 
sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + s32 ret; + + if (unlikely(offset >= rxq->ring_depth)) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + + if (offset >= rxq->ring_depth - rxq->hold_num) { + ret = RTE_ETH_RX_DESC_UNAVAIL; + goto l_end; + } + + if (rxq->processing_idx + offset >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; + else + desc = &rxq->desc_ring[rxq->processing_idx + offset]; + + if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) + ret = RTE_ETH_RX_DESC_DONE; + else + ret = RTE_ETH_RX_DESC_AVAIL; + +l_end: + PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", + offset, ret, rxq->queue_id, rxq->port_id); + return ret; +} + +static s32 sxe2_rx_queue_count(void *rx_queue) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc; + u16 done_num = 0; + + desc = &rxq->desc_ring[rxq->processing_idx]; + while ((done_num < rxq->ring_depth) && + (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK)) { + done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + if (rxq->processing_idx + done_num >= rxq->ring_depth) + desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth]; + else + desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; + } + + PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", + done_num, rxq->queue_id, rxq->port_id); + + return done_num; +} + +static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + if (0 != (rxq->offloads & offload)) { + en = true; + goto l_end; + } + } + +l_end: + return en; +} + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) +{ + struct sxe2_adapter 
*adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); + u32 rx_mode_flags = 0; + + PMD_INIT_FUNC_TRACE(); + + if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; + else + dev->rx_pkt_burst = sxe2_rx_pkts_scattered; + + PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", + rx_mode_flags, dev->data->port_id); + adapter->q_ctxt.rx_mode_flags = rx_mode_flags; +} + +void sxe2_set_common_function(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + dev->rx_queue_count = sxe2_rx_queue_count; + dev->rx_descriptor_status = sxe2_rx_desciptor_status; + + dev->tx_descriptor_status = sxe2_tx_desciptor_status; + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; +} diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h new file mode 100644 index 0000000000..e6f671e3dc --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. + */ + +#ifndef SXE2_TXRX_H +#define SXE2_TXRX_H +#include <ethdev_driver.h> +#include "sxe2_queue.h" + +void sxe2_set_common_function(struct rte_eth_dev *dev); + +u16 sxe2_tx_pkts_prepare(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); + +void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); + +void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); + +void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); + +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c new file mode 100644 index 0000000000..02533abfd5 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_poll.c @@ -0,0 +1,945 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <rte_common.h> +#include <rte_net.h> +#include <rte_vect.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <ethdev_driver.h> +#include <unistd.h> + +#include "sxe2_osal.h" +#include "sxe2_txrx_common.h" +#include "sxe2_txrx_poll.h" +#include "sxe2_txrx.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +static __rte_always_inline s32 +sxe2_tx_bufs_free(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1]; + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) { + mbuf = buffer[0].mbuf; + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = buffer[i].mbuf; + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + rte_mempool_put(buffer->mbuf->pool, buffer->mbuf); + buffer->mbuf = NULL; + } + } + } else { + for (i = 0; i < rs_thresh; ++i, ++buffer) { + mbuf = rte_pktmbuf_prefree_seg(buffer->mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + buffer->mbuf = NULL; + } + } + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return
ret; +} + +static inline s32 sxe2_tx_cleanup(struct sxe2_tx_queue *txq) +{ + s32 ret = SXE2_SUCCESS; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring; + u16 ring_depth = txq->ring_depth; + u16 next_clean = txq->next_clean; + u16 clean_last; + u16 clean_num; + + clean_last = next_clean + txq->rs_thresh; + if (clean_last >= ring_depth) + clean_last = clean_last - ring_depth; + + clean_last = buffer_ring[clean_last].last_id; + if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) != + (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) { + PMD_LOG_DEBUG(TX, "desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64, + clean_last, txq->port_id, + txq->queue_id, txq->desc_ring[clean_last].wb.dd); + ret = SXE2_ERR_DESC_NO_DONE; + goto l_end; + } + + if (clean_last > next_clean) + clean_num = clean_last - next_clean; + else + clean_num = ring_depth - next_clean + clean_last; + + desc_ring[clean_last].wb.dd = 0; + + txq->next_clean = clean_last; + txq->desc_free_num += clean_num; + + ret = SXE2_SUCCESS; + +l_end: + return ret; +} + +static __rte_always_inline u16 +sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt) +{ + struct rte_mbuf *m_seg = tx_pkt; + u16 count = 0; + + while (m_seg != NULL) { + count += DIV_ROUND_UP(m_seg->data_len, + SXE2_TX_MAX_DATA_NUM_PER_DESC); + m_seg = m_seg->next; + } + + return count; +} + +static __rte_always_inline void +sxe2_tx_desc_checksum_fill(u64 offloads, u32 *desc_cmd, u32 *desc_offset, + union sxe2_tx_offload_info ol_info) +{ + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + 
*desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len); + } + + if (offloads & RTE_MBUF_F_TX_TCP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + if (offloads & RTE_MBUF_F_TX_UDP_SEG) { + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + goto l_end; + } + + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len); + break; + default: + + break; + } + +l_end: + return; +} + +static __rte_always_inline u64 +sxe2_tx_data_desc_build_cobt(u32 cmd, u32 offset, u16 buf_size, u16 l2tag) +{ + return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA | + (((u64)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) | + (((u64)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) | + (((u64)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) | + (((u64)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)); +} + +u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = tx_queue; + struct sxe2_tx_buffer *buffer_ring; + struct sxe2_tx_buffer *buffer; + struct sxe2_tx_buffer *next_buffer; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + volatile union sxe2_tx_data_desc *desc_ring; + volatile union sxe2_tx_data_desc *desc; + volatile struct sxe2_tx_context_desc *ctxt_desc; + union sxe2_tx_offload_info ol_info; + struct sxe2_vsi *vsi = txq->vsi; + rte_iova_t buf_dma_addr; + u64 offloads; + u64 desc_type_cmd_tso_mss; + u32 desc_cmd; + u32 desc_offset; + u32 desc_tag; + u32 
desc_tunneling_params; + u16 ipsec_offset; + u16 ctxt_desc_num; + u16 desc_sum_num; + u16 tx_num; + u16 seg_len; + u16 next_use; + u16 last_use; + u16 desc_l2tag2; + + buffer_ring = txq->buffer_ring; + desc_ring = txq->desc_ring; + next_use = txq->next_use; + buffer = &buffer_ring[next_use]; + + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_cleanup(txq); + + for (tx_num = 0; tx_num < nb_pkts; tx_num++) { + tx_pkt = *tx_pkts++; + desc_cmd = 0; + desc_offset = 0; + desc_tag = 0; + desc_tunneling_params = 0; + ipsec_offset = 0; + offloads = tx_pkt->ol_flags; + ol_info.l2_len = tx_pkt->l2_len; + ol_info.l3_len = tx_pkt->l3_len; + ol_info.l4_len = tx_pkt->l4_len; + ol_info.tso_segsz = tx_pkt->tso_segsz; + ol_info.outer_l2_len = tx_pkt->outer_l2_len; + ol_info.outer_l3_len = tx_pkt->outer_l3_len; + + ctxt_desc_num = (offloads & + SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 1 : 0; + if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW)) + ctxt_desc_num = 1; + + if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) + desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num; + else + desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num; + + last_use = next_use + desc_sum_num - 1; + if (last_use >= txq->ring_depth) + last_use = last_use - txq->ring_depth; + + if (desc_sum_num > txq->desc_free_num) { + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + + if (unlikely(desc_sum_num > txq->rs_thresh)) { + while (desc_sum_num > txq->desc_free_num) + if (unlikely(sxe2_tx_cleanup(txq) != 0)) + goto l_exit_logic; + } + } + + desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len); + + if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) { + sxe2_tx_desc_checksum_fill(offloads, &desc_cmd, + &desc_offset, ol_info); + } + + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + desc_tag = tx_pkt->vlan_tci; + } + + if (ctxt_desc_num) { + ctxt_desc = (volatile struct sxe2_tx_context_desc *) + &desc_ring[next_use]; 
+ desc_l2tag2 = 0; + desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT; + + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + if (offloads & RTE_MBUF_F_TX_QINQ) { + desc_l2tag2 = tx_pkt->vlan_tci_outer; + desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK; + } + + ctxt_desc->tunneling_params = + rte_cpu_to_le_32(desc_tunneling_params); + ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2); + ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss); + ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + } + + m_seg = tx_pkt; + + do { + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + if (buffer->mbuf) { + rte_pktmbuf_free_seg(buffer->mbuf); + buffer->mbuf = NULL; + } + + buffer->mbuf = m_seg; + seg_len = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) && + unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) { + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset, + SXE2_TX_MAX_DATA_NUM_PER_DESC, + desc_tag); + buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC; + seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC; + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + desc = &desc_ring[next_use]; + next_buffer = &buffer_ring[buffer->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf); + } + + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(desc_cmd, + desc_offset, seg_len, desc_tag); + + buffer->last_id = last_use; + next_use = buffer->next_id; + buffer = next_buffer; + + m_seg = 
m_seg->next; + } while (m_seg); + + desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP; + txq->desc_used_num += desc_sum_num; + txq->desc_free_num -= desc_sum_num; + + if (txq->desc_used_num >= txq->rs_thresh) { + PMD_LOG_DEBUG(TX, "Tx pkts set RS bit." + "last_use=%u port_id=%u, queue_id=%u", + last_use, txq->port_id, txq->queue_id); + desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS; + txq->desc_used_num = 0; + } + + desc->read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT); + } + +l_exit_logic: + if (tx_num == 0) + goto l_end; + goto l_end_of_tx; +l_end_of_tx: + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, tx_num); + + txq->next_use = next_use; + +l_end: + return tx_num; +} + +static __rte_always_inline void +sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 desc_offset; + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, (*tx_pkts)->data_len, 0); +} +static __rte_always_inline void +sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf **tx_pkts) +{ + rte_iova_t buf_dma_addr; + u32 i; + u32 desc_offset; + for (i = 0; i < SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) { + buf_dma_addr = rte_mbuf_data_iova(*tx_pkts); + desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len); + desc->read.type_cmd_off_bsz_l2t = + sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP, + desc_offset, + (*tx_pkts)->data_len, + 0); + } +} + +static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + 
struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use]; + volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use]; + u32 i, j; + u32 mainpart; + u32 leftover; + mainpart = nb_pkts & ((u32)~SXE2_TX_FILL_PER_LOOP_MASK); + leftover = nb_pkts & ((u32)SXE2_TX_FILL_PER_LOOP_MASK); + for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) { + for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j) + (buffer + i + j)->mbuf = *(tx_pkts + i + j); + sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i); + } + if (unlikely(leftover > 0)) { + for (i = 0; i < leftover; ++i) { + (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i); + sxe2_tx_data_desc_fill(desc + mainpart + i, + tx_pkts + mainpart + i); + } + } +} + +static inline u16 sxe2_tx_pkts_batch(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; + volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring; + u16 res_num = 0; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_DEBUG(TX, "Tx batch: may not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + txq->desc_free_num -= nb_pkts; + if ((txq->next_use + nb_pkts) > txq->ring_depth) { + res_num = txq->ring_depth - txq->next_use; + sxe2_tx_ring_fill(txq, tx_pkts, res_num); + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs = txq->rs_thresh - 1; + txq->next_use = 0; + } + sxe2_tx_ring_fill(txq, tx_pkts + res_num, nb_pkts - res_num); + txq->next_use = txq->next_use + (nb_pkts - res_num); + if (txq->next_use > txq->next_rs) { + desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + if (txq->next_rs >= txq->ring_depth) + txq->next_rs = txq->rs_thresh - 1; + } + if 
(txq->next_use >= txq->ring_depth) + txq->next_use = 0; + PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, txq->next_use, nb_pkts); + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use); +l_end: + return nb_pkts; +} + +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 tx_done_num; + u16 tx_once_num; + u16 tx_need_num; + if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) { + tx_done_num = sxe2_tx_pkts_batch(tx_queue, + tx_pkts, nb_pkts); + goto l_end; + } + tx_done_num = 0; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM); + tx_once_num = sxe2_tx_pkts_batch(tx_queue, + &tx_pkts[tx_done_num], tx_need_num); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } +l_end: + return tx_done_num; +} + +static inline void +sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, u16 hold_num, u16 rx_id) +{ + hold_num += rxq->hold_num; + + if (hold_num > rxq->rx_free_thresh) { + rx_id = (u16)((rx_id == 0) ? 
(rxq->ring_depth - 1) : (rx_id - 1)); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id); + hold_num = 0; + } + rxq->hold_num = hold_num; +} + +static inline u64 +sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq, + union sxe2_rx_desc *desc) +{ + u64 flags = 0; + u64 desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK))) + goto l_end; + + if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK))) + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + else + flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) { + flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD); + goto l_end; + } + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK))) + flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + else + flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + + if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK))) + flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + else + flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + + if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK))) + flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + +l_end: + return flags; +} + +static __rte_always_inline void +sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + u64 qword1; + u64 pkt_flags; + qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + + mbuf->ol_flags = 0; + mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)]; + + pkt_flags = sxe2_rx_desc_error_para(rxq, rxd); + + mbuf->ol_flags |= pkt_flags; +} + +static __rte_always_inline void +sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf, + union sxe2_rx_desc *rxd) +{ + u64 qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } +} + +u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u " + "queue_id:%u", rxq->port_id, rxq->queue_id); + break; + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + + rte_prefetch0(buffer_ring[cur_idx]); + + if (0 == (cur_idx & 
0x3)) { + rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + + cur_mbuf = *cur_buffer; + + *cur_buffer = new_mbuf; + + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + + if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + pkt_len - + RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + first_seg->nb_segs--; + last_seg->next = NULL; + } + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + first_seg->port = rxq->port_id; + + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + 
first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} + +u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; + volatile union sxe2_rx_desc *desc_ring; + volatile union sxe2_rx_desc *desc; + union sxe2_rx_desc desc_tmp; + struct rte_mbuf **buffer_ring; + struct rte_mbuf **cur_buffer; + struct rte_mbuf *cur_mbuf; + struct rte_mbuf *cur_mbuf_pay; + struct rte_mbuf *new_mbuf; + struct rte_mbuf *new_mbuf_pay = NULL; + struct rte_mbuf *first_seg; + struct rte_mbuf *last_seg; + u64 qword1; + u16 done_num; + u16 hold_num; + u16 cur_idx; + u16 pkt_len; + u16 hdr_len; + + desc_ring = rxq->desc_ring; + buffer_ring = rxq->buffer_ring; + cur_idx = rxq->processing_idx; + first_seg = rxq->pkt_first_seg; + last_seg = rxq->pkt_last_seg; + done_num = 0; + hold_num = 0; + new_mbuf = NULL; + + while (done_num < nb_pkts) { + desc = &desc_ring[cur_idx]; + qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len); + + if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1)) + break; + + if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 || + first_seg == NULL) { + new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(new_mbuf == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + break; + } + } + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp); + if (unlikely(new_mbuf_pay == NULL)) { + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++; + if (new_mbuf != NULL) + rte_pktmbuf_free(new_mbuf); + new_mbuf = NULL; + break; + } + } + + hold_num++; + desc_tmp = *desc; + cur_buffer = &buffer_ring[cur_idx]; + cur_idx++; + if (unlikely(cur_idx == rxq->ring_depth)) + cur_idx = 0; + rte_prefetch0(buffer_ring[cur_idx]); + if (0 == (cur_idx & 0x3)) { + 
rte_prefetch0(&desc_ring[cur_idx]); + rte_prefetch0(&buffer_ring[cur_idx]); + } + cur_mbuf = *cur_buffer; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + *cur_buffer = new_mbuf; + desc->read.hdr_addr = 0; + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + } else { + if (first_seg == NULL) { + *cur_buffer = new_mbuf; + new_mbuf->next = new_mbuf_pay; + new_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + } else { + cur_mbuf_pay = cur_mbuf->next; + new_mbuf_pay->next = NULL; + new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + cur_mbuf->next = new_mbuf_pay; + desc->read.hdr_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf)); + desc->read.pkt_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay)); + cur_mbuf = cur_mbuf_pay; + } + } + new_mbuf = NULL; + if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM; + if (first_seg == NULL) { + first_seg = cur_mbuf; + first_seg->nb_segs = 1; + first_seg->pkt_len = pkt_len; + } else { + first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } else { + if (first_seg == NULL) { + cur_mbuf->nb_segs = 2; + cur_mbuf->next->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1); + cur_mbuf->data_len = hdr_len; + cur_mbuf->pkt_len = hdr_len + pkt_len; + cur_mbuf->next->data_len = pkt_len; + first_seg = cur_mbuf; + cur_mbuf = cur_mbuf->next; + last_seg = cur_mbuf; + } else { + cur_mbuf->nb_segs = 1; + cur_mbuf->next = NULL; + pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1); + cur_mbuf->data_len = pkt_len; + + 
first_seg->pkt_len += pkt_len; + first_seg->nb_segs++; + last_seg->next = cur_mbuf; + } + } + +#ifdef RTE_ETHDEV_DEBUG_RX + + rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg)); +#endif + + if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_EOP_MASK)) { + last_seg = cur_mbuf; + continue; + } + + if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) || + unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + continue; + } + + cur_mbuf->next = NULL; + if (unlikely(rxq->crc_len > 0)) { + first_seg->pkt_len -= RTE_ETHER_CRC_LEN; + if (pkt_len <= RTE_ETHER_CRC_LEN) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->data_len = last_seg->data_len + + pkt_len - RTE_ETHER_CRC_LEN; + last_seg->next = NULL; + } else { + cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN; + } + } else if (pkt_len == 0) { + rte_pktmbuf_free_seg(cur_mbuf); + cur_mbuf = NULL; + first_seg->nb_segs--; + last_seg->next = NULL; + } + + first_seg->port = rxq->port_id; + sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp); + + if (rxq->vsi->adapter->devargs.sw_stats_en) + sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp); + + rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off)); + + rx_pkts[done_num] = first_seg; + done_num++; + + first_seg = NULL; + } + + rxq->processing_idx = cur_idx; + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + + sxe2_update_rx_tail(rxq, hold_num, cur_idx); + + return done_num; +} -- 2.47.3
* [PATCH v13 10/10] net/sxe2: add vectorized Rx and Tx
2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5
2026-05-13 14:45 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback Stephen Hemminger
From: liujie5 @ 2026-05-12 11:36 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>

This patch implements the vectorized data path for the sxe2 PMD. It
utilizes SIMD instructions (e.g., SSE) to process multiple packets
simultaneously, significantly improving throughput for small packet
processing. The implementation includes:

* Vectorized Rx burst function for bulk descriptor processing.
* Vectorized Tx burst function with optimized resource cleanup.
* Capability flags update to reflect vectorized path support.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com> --- drivers/net/sxe2/meson.build | 7 + drivers/net/sxe2/sxe2_ethdev.c | 35 +- drivers/net/sxe2/sxe2_ethdev.h | 1 - drivers/net/sxe2/sxe2_queue.c | 28 ++ drivers/net/sxe2/sxe2_queue.h | 3 + drivers/net/sxe2/sxe2_txrx.c | 223 +++++++--- drivers/net/sxe2/sxe2_txrx.h | 11 +- drivers/net/sxe2/sxe2_txrx_poll.h | 3 +- drivers/net/sxe2/sxe2_txrx_vec.c | 197 +++++++++ drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++++ drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++ drivers/net/sxe2/sxe2_txrx_vec_sse.c | 545 ++++++++++++++++++++++++ 12 files changed, 1277 insertions(+), 83 deletions(-) create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build index b348dd71a1..3df57aee8c 100644 --- a/drivers/net/sxe2/meson.build +++ b/drivers/net/sxe2/meson.build @@ -11,6 +11,12 @@ cflags += ['-g'] deps += ['common_sxe2', 'hash','cryptodev','security'] +includes += include_directories('../../common/sxe2') + +if arch_subdir == 'x86' + sources += files('sxe2_txrx_vec_sse.c') +endif + sources += files( 'sxe2_ethdev.c', 'sxe2_cmd_chnl.c', @@ -20,6 +26,7 @@ sources += files( 'sxe2_rx.c', 'sxe2_txrx_poll.c', 'sxe2_txrx.c', + 'sxe2_txrx_vec.c', ) allow_internal_get_api = true diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c index 7e9a842eb9..b6b444a600 100644 --- a/drivers/net/sxe2/sxe2_ethdev.c +++ b/drivers/net/sxe2/sxe2_ethdev.c @@ -58,17 +58,11 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = { }; static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = { - /* SXE2_PCI_MAP_RES_INVALID */ {0, 0, 0}, - /* SXE2_PCI_MAP_RES_DOORBELL_TX */ { SXE2_TXQ_LEGACY_DBLL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL */ { SXE2_RXQ_TAIL(0), 0, 4}, - /* 
SXE2_PCI_MAP_RES_IRQ_DYN */ { SXE2_VF_DYN_CTL(0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_ITR (ITR0 used by default) */ { SXE2_VF_INT_ITR(0, 0), 0, 4}, - /* SXE2_PCI_MAP_RES_IRQ_MSIX */ { SXE2_BAR4_MSIX_CTL(0), 4, 0x10}, }; @@ -101,25 +95,6 @@ static s32 sxe2_dev_stop(struct rte_eth_dev *dev) return ret; } -static s32 sxe2_queues_start(struct rte_eth_dev *dev) -{ - s32 ret = SXE2_SUCCESS; - ret = sxe2_txqs_all_start(dev); - if (ret) { - PMD_LOG_ERR(INIT, "Failed to start tx queue."); - goto l_end; - } - - ret = sxe2_rxqs_all_start(dev); - if (ret) { - PMD_LOG_ERR(INIT, "Failed to start rx queue."); - sxe2_txqs_all_stop(dev); - } - -l_end: - return ret; -} - static s32 sxe2_dev_start(struct rte_eth_dev *dev) { s32 ret = SXE2_SUCCESS; @@ -152,7 +127,7 @@ static s32 sxe2_dev_start(struct rte_eth_dev *dev) static s32 sxe2_dev_close(struct rte_eth_dev *dev) { (void)sxe2_dev_stop(dev); - + (void)sxe2_queues_release(dev); sxe2_vsi_uninit(dev); sxe2_dev_pci_map_uinit(dev); @@ -290,13 +265,19 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = { .dev_close = sxe2_dev_close, .dev_infos_get = sxe2_dev_infos_get, + .rx_queue_start = sxe2_rx_queue_start, + .rx_queue_stop = sxe2_rx_queue_stop, + .tx_queue_start = sxe2_tx_queue_start, + .tx_queue_stop = sxe2_tx_queue_stop, .rx_queue_setup = sxe2_rx_queue_setup, - .tx_queue_setup = sxe2_tx_queue_setup, .rx_queue_release = sxe2_rx_queue_release, + .tx_queue_setup = sxe2_tx_queue_setup, .tx_queue_release = sxe2_tx_queue_release, .rxq_info_get = sxe2_rx_queue_info_get, .txq_info_get = sxe2_tx_queue_info_get, + .rx_burst_mode_get = sxe2_rx_burst_mode_get, + .tx_burst_mode_get = sxe2_tx_burst_mode_get, }; struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter, diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h index 4ef7854479..43148f9b03 100644 --- a/drivers/net/sxe2/sxe2_ethdev.h +++ b/drivers/net/sxe2/sxe2_ethdev.h @@ -11,7 +11,6 @@ #include <rte_tm_driver.h> #include <rte_io.h> -#include
"sxe2_common.h" #include "sxe2_errno.h" #include "sxe2_type.h" #include "sxe2_vsi.h" diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c index 98343679f6..b1860490aa 100644 --- a/drivers/net/sxe2/sxe2_queue.c +++ b/drivers/net/sxe2/sxe2_queue.c @@ -6,6 +6,8 @@ #include "sxe2_queue.h" #include "sxe2_common_log.h" #include "sxe2_errno.h" +#include "sxe2_tx.h" +#include "sxe2_rx.h" void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, struct sxe2_drv_queue_caps *q_caps) @@ -37,3 +39,29 @@ s32 sxe2_queues_init(struct rte_eth_dev *dev) return ret; } + +s32 sxe2_queues_start(struct rte_eth_dev *dev) +{ + s32 ret = SXE2_SUCCESS; + + ret = sxe2_txqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start tx queue."); + goto l_end; + } + + ret = sxe2_rxqs_all_start(dev); + if (ret) { + PMD_LOG_ERR(INIT, "Failed to start rx queue."); + sxe2_txqs_all_stop(dev); + } +l_end: + return ret; +} + +void sxe2_queues_release(struct rte_eth_dev *dev) +{ + sxe2_all_rxqs_release(dev); + + sxe2_all_txqs_release(dev); +} diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h index 7fa22e2820..93402186c7 100644 --- a/drivers/net/sxe2/sxe2_queue.h +++ b/drivers/net/sxe2/sxe2_queue.h @@ -188,4 +188,7 @@ void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter, s32 sxe2_queues_init(struct rte_eth_dev *dev); +s32 sxe2_queues_start(struct rte_eth_dev *dev); + +void sxe2_queues_release(struct rte_eth_dev *dev); #endif diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c index a7b94e8967..8bb0880eb6 100644 --- a/drivers/net/sxe2/sxe2_txrx.c +++ b/drivers/net/sxe2/sxe2_txrx.c @@ -9,12 +9,11 @@ #include <rte_memzone.h> #include <ethdev_driver.h> #include <unistd.h> - #include "sxe2_txrx.h" #include "sxe2_txrx_common.h" +#include "sxe2_txrx_vec.h" #include "sxe2_txrx_poll.h" #include "sxe2_ethdev.h" - #include "sxe2_common_log.h" #include "sxe2_errno.h" #include "sxe2_osal.h" @@ -22,18 +21,38 @@ #if 
defined(RTE_ARCH_ARM64) #include <rte_cpuflags.h> #endif - +s32 __rte_cold +sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) || + txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + } + *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH; +l_end: + return ret; +} static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) { struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue; s32 ret; u16 desc_idx; - if (unlikely(offset >= txq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - desc_idx = txq->next_use + offset; desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh); if (desc_idx >= txq->ring_depth) { @@ -41,19 +60,16 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) if (desc_idx >= txq->ring_depth) desc_idx -= txq->ring_depth; } - if (desc_idx == 0) desc_idx = txq->rs_thresh - 1; else desc_idx -= 1; - if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) == (txq->desc_ring[desc_idx].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK))) ret = RTE_ETH_TX_DESC_DONE; else ret = RTE_ETH_TX_DESC_FULL; - l_end: return ret; } @@ -61,13 +77,11 @@ static s32 sxe2_tx_desciptor_status(void *tx_queue, u16 offset) static inline s32 sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf) { struct rte_mbuf *m_seg = mbuf; - while (m_seg != NULL) { if (m_seg->data_len == 0) return SXE2_ERR_INVAL; m_seg = m_seg->next; } - return SXE2_SUCCESS; } @@ -79,7 +93,6 @@ u16 sxe2_tx_pkts_prepare(void *tx_queue, u64 ol_flags = 0; s32 ret = SXE2_SUCCESS; s32 i = 0; - for (i = 0; i < nb_pkts; i++) { mbuf = tx_pkts[i]; if (!mbuf) @@ -98,12 +111,10 @@ u16 sxe2_tx_pkts_prepare(void *tx_queue, 
rte_errno = -SXE2_ERR_INVAL; goto l_end; } - if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) { rte_errno = -SXE2_ERR_INVAL; goto l_end; } - #ifdef RTE_ETHDEV_DEBUG_TX ret = rte_validate_tx_offload(mbuf); if (ret != SXE2_SUCCESS) { @@ -116,14 +127,12 @@ u16 sxe2_tx_pkts_prepare(void *tx_queue, rte_errno = -ret; goto l_end; } - ret = sxe2_tx_mbuf_empty_check(mbuf); if (ret != SXE2_SUCCESS) { rte_errno = -ret; goto l_end; } } - l_end: return i; } @@ -132,42 +141,117 @@ void sxe2_tx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 tx_mode_flags = 0; - + s32 ret; + u32 vec_flags; + u32 batch_flags; + RTE_SET_USED(vec_flags); PMD_INIT_FUNC_TRACE(); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_tx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) { + tx_mode_flags = vec_flags; +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) { + PMD_LOG_INFO(TX, "AVX512 is not supported in build env."); + } + if (((tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) == 0) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) { + PMD_LOG_INFO(TX, "AVX2 is not supported in build env."); + } - dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; - dev->tx_pkt_burst = sxe2_tx_pkts; + if (((tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) == 0)) + tx_mode_flags |= SXE2_TX_MODE_VEC_SSE; +#endif + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + ret = sxe2_tx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK); + } + } + ret = sxe2_tx_simple_batch_support_check(dev, &batch_flags); + if (ret == SXE2_SUCCESS && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH) + 
tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH; + } + if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) { + dev->tx_pkt_prepare = NULL; +#ifdef RTE_ARCH_X86 + if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse; + } else { + dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple; + } +#endif + } else { + if (tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) { + dev->tx_pkt_prepare = NULL; + dev->tx_pkt_burst = sxe2_tx_pkts_simple; + } else { + dev->tx_pkt_prepare = sxe2_tx_pkts_prepare; + dev->tx_pkt_burst = sxe2_tx_pkts; + } + } adapter->q_ctxt.tx_mode_flags = tx_mode_flags; PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.", tx_mode_flags, dev->data->port_id); } +static const struct { + eth_tx_burst_t tx_burst; + const char *info; +} sxe2_tx_burst_infos[] = { + { sxe2_tx_pkts, "Scalar" }, +#ifdef RTE_ARCH_X86 + { sxe2_tx_pkts_vec_sse, "Vector SSE" }, + { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" }, +#endif +}; + +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode) +{ + eth_tx_burst_t pkt_burst = dev->tx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i; + u32 size; + size = RTE_DIM(sxe2_tx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_tx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + static s32 sxe2_rx_desciptor_status(void *rx_queue, u16 offset) { struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; s32 ret; - if (unlikely(offset >= rxq->ring_depth)) { ret = SXE2_ERR_INVAL; goto l_end; } - if (offset >= rxq->ring_depth - rxq->hold_num) { ret = RTE_ETH_RX_DESC_UNAVAIL; goto l_end; } - if (rxq->processing_idx + offset >= rxq->ring_depth) desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth]; else desc = 
&rxq->desc_ring[rxq->processing_idx + offset]; - if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK) ret = RTE_ETH_RX_DESC_DONE; else ret = RTE_ETH_RX_DESC_AVAIL; - l_end: PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u", offset, ret, rxq->queue_id, rxq->port_id); @@ -179,7 +263,6 @@ static s32 sxe2_rx_queue_count(void *rx_queue) struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue; volatile union sxe2_rx_desc *desc; u16 done_num = 0; - desc = &rxq->desc_ring[rxq->processing_idx]; while ((done_num < rxq->ring_depth) && (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & @@ -190,55 +273,97 @@ static s32 sxe2_rx_queue_count(void *rx_queue) else desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM; } - PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u", done_num, rxq->queue_id, rxq->port_id); - return done_num; } -static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) -{ - struct sxe2_rx_queue *rxq; - bool en = false; - u16 i; - - for (i = 0; i < dev->data->nb_rx_queues; ++i) { - rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; - if (rxq == NULL) - continue; - - if (0 != (rxq->offloads & offload)) { - en = true; - goto l_end; - } - } - -l_end: - return en; -} - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev) { struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev); u32 rx_mode_flags = 0; + s32 ret; + u32 vec_flags; PMD_INIT_FUNC_TRACE(); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + ret = sxe2_rx_vec_support_check(dev, &vec_flags); + if (ret == SXE2_SUCCESS && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + rx_mode_flags = vec_flags; +#ifdef RTE_ARCH_X86 + if ((rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) && + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)) + PMD_LOG_INFO(RX, "AVX512 is not supported in build env"); + + if (((rx_mode_flags & 
SXE2_RX_MODE_VEC_SET_MASK) == 0) && + ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1) || + (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1)) && + (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)) + PMD_LOG_INFO(RX, "AVX2 is not supported in build env"); + + if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) && + rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) + rx_mode_flags |= SXE2_RX_MODE_VEC_SSE; +#endif + if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) { + ret = sxe2_rx_queues_vec_prepare(dev); + if (ret != SXE2_SUCCESS) + rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK); + } + } + } +#ifdef RTE_ARCH_X86 + if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) { + dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload; + goto l_end; + } +#endif if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split; else dev->rx_pkt_burst = sxe2_rx_pkts_scattered; - + goto l_end; +l_end: PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.", rx_mode_flags, dev->data->port_id); adapter->q_ctxt.rx_mode_flags = rx_mode_flags; } +static const struct { + eth_rx_burst_t rx_burst; + const char *info; +} sxe2_rx_burst_infos[] = { + { sxe2_rx_pkts_scattered, "Scalar Scattered" }, + { sxe2_rx_pkts_scattered_split, "Scalar Scattered split" }, +#ifdef RTE_ARCH_X86 + { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" }, +#endif +}; + +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode) +{ + eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; + s32 ret = SXE2_ERR_INVAL; + u32 i, size; + size = RTE_DIM(sxe2_rx_burst_infos); + for (i = 0; i < size; ++i) { + if (pkt_burst == sxe2_rx_burst_infos[i].rx_burst) { + snprintf(mode->info, sizeof(mode->info), "%s", + sxe2_rx_burst_infos[i].info); + ret = SXE2_SUCCESS; + break; + } + } + return ret; +} + void sxe2_set_common_function(struct rte_eth_dev *dev) { PMD_INIT_FUNC_TRACE(); - 
dev->rx_queue_count = sxe2_rx_queue_count; dev->rx_descriptor_status = sxe2_rx_desciptor_status; diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h index e6f671e3dc..8f929c4f19 100644 --- a/drivers/net/sxe2/sxe2_txrx.h +++ b/drivers/net/sxe2/sxe2_txrx.h @@ -6,16 +6,17 @@ #define SXE2_TXRX_H #include <ethdev_driver.h> #include "sxe2_queue.h" - void sxe2_set_common_function(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev, + u32 *batch_flags); u16 sxe2_tx_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - void sxe2_tx_mode_func_set(struct rte_eth_dev *dev); - void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq); - void sxe2_rx_mode_func_set(struct rte_eth_dev *dev); - +s32 sxe2_tx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode); +s32 sxe2_rx_burst_mode_get(struct rte_eth_dev *dev, + __rte_unused u16 queue_id, struct rte_eth_burst_mode *mode); #endif diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h index 4924b0f41f..67da08e58e 100644 --- a/drivers/net/sxe2/sxe2_txrx_poll.h +++ b/drivers/net/sxe2/sxe2_txrx_poll.h @@ -8,7 +8,8 @@ #include "sxe2_queue.h" u16 sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); - +u16 sxe2_tx_pkts_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); u16 sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts); diff --git a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c new file mode 100644 index 0000000000..30e1468020 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.c @@ -0,0 +1,197 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_queue.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_errno.h" + +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_rx_queue *rxq; + s32 ret = SXE2_SUCCESS; + u16 i; + *vec_flags = SXE2_RX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (!rte_is_power_of_2(rxq->ring_depth)) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC && + (rxq->ring_depth % rxq->rx_free_thresh) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload) +{ + struct sxe2_rx_queue *rxq; + bool en = false; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + if ((rxq->offloads & offload) != 0) { + en = true; + goto l_end; + } + } +l_end: + return en; +} + +static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq) +{ + const u16 mask = rxq->ring_depth - 1; + u16 i; + if (unlikely(!rxq->buffer_ring)) { + PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring is NULL." 
+ "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id); + return; + } + if (rxq->realloc_num >= rxq->ring_depth) + return; + if (rxq->realloc_num == 0) { + for (i = 0; i < rxq->ring_depth; ++i) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } else { + for (i = rxq->processing_idx; + i != rxq->realloc_start; + i = (i + 1) & mask) { + if (rxq->buffer_ring[i]) { + rte_pktmbuf_free_seg(rxq->buffer_ring[i]); + rxq->buffer_ring[i] = NULL; + } + } + } + rxq->realloc_num = rxq->ring_depth; + memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0])); +} + +static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq) +{ + uintptr_t data; + struct rte_mbuf mbuf_def; + + memset(&mbuf_def, 0, sizeof(mbuf_def)); + mbuf_def.buf_addr = 0; + mbuf_def.nb_segs = 1; + mbuf_def.data_off = RTE_PKTMBUF_HEADROOM; + mbuf_def.port = rxq->port_id; + rte_mbuf_refcnt_set(&mbuf_def, 1); + rte_compiler_barrier(); + data = (uintptr_t)&mbuf_def.rearm_data; + rxq->mbuf_init_value = *(u64 *)data; +} + +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_rx_queue *rxq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i]; + if (rxq == NULL) { + PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i); + continue; + } + rxq->ops.mbufs_release = sxe2_rx_queue_mbufs_release_vec; + sxe2_rx_queue_vec_init(rxq); + } + return ret; +} + +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags) +{ + struct sxe2_tx_queue *txq; + s32 ret = SXE2_SUCCESS; + u32 i; + *vec_flags = SXE2_TX_MODE_VEC_SIMPLE; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i]; + if (txq == NULL) { + ret = SXE2_ERR_INVAL; + goto l_end; + } + if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC || + txq->rs_thresh > 
SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) { + ret = SXE2_ERR_NOTSUP; + goto l_end; + } + if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0) + *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD; + } +l_end: + return ret; +} + +static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + u16 i; + + if (unlikely(txq == NULL || txq->buffer_ring == NULL)) { + PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params."); + return; + } + i = txq->next_dd - (txq->rs_thresh - 1); + buffer = txq->buffer_ring; + if (txq->next_use < i) { + for ( ; i < txq->ring_depth; ++i) { + if (buffer[i].mbuf != NULL) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + } + i = 0; + } + for (; i < txq->next_use; ++i) { + if (buffer[i].mbuf != NULL) { + rte_pktmbuf_free_seg(buffer[i].mbuf); + buffer[i].mbuf = NULL; + } + } +} + +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev) +{ + struct sxe2_tx_queue *txq = NULL; + s32 ret = SXE2_SUCCESS; + u16 i; + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) { + PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i); + continue; + } + txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec; + } + return ret; +} diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h new file mode 100644 index 0000000000..cb6a3dd3b8 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef _SXE2_TXRX_VEC_H_ +#define _SXE2_TXRX_VEC_H_ +#include <ethdev_driver.h> +#include "sxe2_queue.h" +#include "sxe2_type.h" +#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10) +#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \ + SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \ + SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \ + SXE2_RX_MODE_VEC_NEON) +#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0) +#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1) +#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2) +#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3) +#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4) +#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5) +#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10) +#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \ + SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \ + SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \ + SXE2_TX_MODE_VEC_NEON) +#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \ + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \ + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_TSO | \ + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \ + RTE_ETH_TX_OFFLOAD_SECURITY | \ + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) +#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_TCP_CKSUM) +#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_TIMESTAMP | \ + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \ + RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_SECURITY | \ + 
RTE_ETH_RX_OFFLOAD_QINQ_STRIP) +#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \ + RTE_ETH_RX_OFFLOAD_CHECKSUM | \ + RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH) +#ifdef RTE_ARCH_X86 +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts); +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts); +#endif +s32 __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +s32 __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev); +s32 __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, u32 *vec_flags); +bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, u64 offload); +s32 __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev); +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h new file mode 100644 index 0000000000..c0405c9a59 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h @@ -0,0 +1,235 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#ifndef __SXE2_TXRX_VEC_COMMON_H__ +#define __SXE2_TXRX_VEC_COMMON_H__ +#include <rte_atomic.h> +#ifdef PCLINT +#include "avx_stub.h" +#endif +#include "sxe2_rx.h" +#include "sxe2_queue.h" +#include "sxe2_tx.h" +#include "sxe2_vsi.h" +#include "sxe2_ethdev.h" +#define SXE2_RX_NUM_PER_LOOP_SSE 4 +#define SXE2_RX_NUM_PER_LOOP_AVX 8 +#define SXE2_RX_NUM_PER_LOOP_NEON 4 +#define SXE2_RX_REARM_THRESH_VEC 64 +#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32 +#define SXE2_TX_RS_THRESH_MIN_VEC 32 +#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64 + +static __rte_always_inline void +sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + u16 i; + for (i = 0; i < nb_pkts; ++i) + buffer[i].mbuf = tx_pkts[i]; +} + +static __rte_always_inline s32 +sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq) +{ + struct sxe2_tx_buffer *buffer; + struct rte_mbuf *mbuf; + struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC]; + s32 ret; + u32 i; + u16 rs_thresh; + u16 free_num; + if ((txq->desc_ring[txq->next_dd].wb.dd & + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) != + rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) { + ret = 0; + goto l_end; + } + rs_thresh = txq->rs_thresh; + buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)]; + mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf); + if (likely(mbuf)) { + mbuf_free_arr[0] = mbuf; + free_num = 1; + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (likely(mbuf)) { + if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) { + mbuf_free_arr[free_num] = mbuf; + free_num++; + } else { + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + mbuf_free_arr[0] = mbuf; + free_num = 1; + } + } + } + rte_mempool_put_bulk(mbuf_free_arr[0]->pool, + (void *)mbuf_free_arr, free_num); + } else { + for (i = 1; i < rs_thresh; ++i) { + mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf); + if (mbuf != NULL) + rte_mempool_put(mbuf->pool, mbuf); + } + 
} + txq->desc_free_num += rs_thresh; + txq->next_dd += rs_thresh; + if (txq->next_dd >= txq->ring_depth) + txq->next_dd = rs_thresh - 1; + ret = rs_thresh; +l_end: + return ret; +} + +static inline void +sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, u64 *desc_qw1) +{ + u64 offloads = mbuf->ol_flags; + u32 desc_cmd = 0; + u32 desc_offset = 0; + if (offloads & RTE_MBUF_F_TX_IP_CKSUM) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV4) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } else if (offloads & RTE_MBUF_F_TX_IPV6) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6; + desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len); + } + switch (offloads & RTE_MBUF_F_TX_L4_MASK) { + case RTE_MBUF_F_TX_TCP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_SCTP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + case RTE_MBUF_F_TX_UDP_CKSUM: + desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP; + desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len); + break; + default: + break; + } + *desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) { + desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1; + *desc_qw1 |= ((u64)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT; + } + *desc_qw1 |= ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT; +} +#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \ + (((_flags) & 0x30) >> 4) + +static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq, + struct rte_mbuf *mbuf, u8 umbcast_flag) +{ + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1, + rte_memory_order_relaxed); + 
rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes, + mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed); + switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) { + case SXE2_RX_DESC_STATUS_UNICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_MUTICAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1, + rte_memory_order_relaxed); + break; + case SXE2_RX_DESC_STATUS_BOARDCAST: + rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1, + rte_memory_order_relaxed); + break; + default: + break; + } + } +} + +static inline u16 +sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq, + struct rte_mbuf **mbuf_bufs, u16 mbuf_num, + u8 *split_rxe_flags, u8 *umbcast_flags) +{ + struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *tmp_seg; + u16 done_num, buf_idx; + done_num = 0; + for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) { + if (last_seg) { + last_seg->next = mbuf_bufs[buf_idx]; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + first_seg->nb_segs++; + first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len; + last_seg = last_seg->next; + if (split_rxe_flags[buf_idx] == 0) { + first_seg->hash = last_seg->hash; + first_seg->vlan_tci = last_seg->vlan_tci; + first_seg->ol_flags = last_seg->ol_flags; + first_seg->pkt_len -= rxq->crc_len; + if (last_seg->data_len > rxq->crc_len) { + last_seg->data_len -= rxq->crc_len; + } else { + tmp_seg = first_seg; + first_seg->nb_segs--; + while (tmp_seg->next != last_seg) + tmp_seg = tmp_seg->next; + tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len); + tmp_seg->next = NULL; + rte_pktmbuf_free_seg(last_seg); + last_seg = NULL; + } + done_pkts[done_num++] = first_seg; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]); + first_seg = NULL; + last_seg = NULL; + } else if 
(split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + continue; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free(first_seg); + first_seg = NULL; + last_seg = NULL; + continue; + } + } else { + if (split_rxe_flags[buf_idx] == 0) { + done_pkts[done_num++] = mbuf_bufs[buf_idx]; + sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx], + umbcast_flags[buf_idx]); + continue; + } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) { + first_seg = mbuf_bufs[buf_idx]; + last_seg = first_seg; + mbuf_bufs[buf_idx]->data_len += rxq->crc_len; + mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len; + } else { + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes, + mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN, + rte_memory_order_relaxed); + rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]); + continue; + } + } + } + rxq->pkt_first_seg = first_seg; + rxq->pkt_last_seg = last_seg; + rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *))); + return done_num; +} +#endif diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c new file mode 100644 index 0000000000..8cf11849d6 --- /dev/null +++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c @@ -0,0 +1,545 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd. 
+ */ + +#include <ethdev_driver.h> +#include <rte_bitops.h> +#include <rte_malloc.h> +#include <rte_mempool.h> +#include <rte_vect.h> +#include "rte_common.h" +#include "sxe2_ethdev.h" +#include "sxe2_common_log.h" +#include "sxe2_queue.h" +#include "sxe2_txrx_vec.h" +#include "sxe2_txrx_vec_common.h" +#include "sxe2_vsi.h" + +static __rte_always_inline void +sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc, + struct rte_mbuf *pkt, + u64 desc_cmd, bool with_offloads) +{ + __m128i data_desc; + u64 desc_qw1; + u32 desc_offset; + desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA | + ((u64)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT | + ((u64)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT); + desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len); + desc_qw1 |= ((u64)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT; + if (with_offloads) + sxe2_tx_desc_fill_offloads(pkt, &desc_qw1); + data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc); +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + volatile union sxe2_tx_data_desc *desc; + struct sxe2_tx_buffer *buffer; + u16 next_use; + u16 res_num; + u16 tx_num; + u16 i; + if (txq->desc_free_num < txq->free_thresh) + (void)sxe2_tx_bufs_free_vec(txq); + nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts); + if (unlikely(nb_pkts == 0)) { + PMD_LOG_DEBUG(TX, "Tx pkts sse batch: may not enough free desc, " + "free_desc=%u, need_tx_pkts=%u", + txq->desc_free_num, nb_pkts); + goto l_end; + } + tx_num = nb_pkts; + next_use = txq->next_use; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + txq->desc_free_num -= nb_pkts; + res_num = txq->ring_depth - txq->next_use; + if (tx_num >= res_num) { + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num); + for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, 
*tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++, + (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS), + with_offloads); + tx_num -= res_num; + next_use = 0; + txq->next_rs = txq->rs_thresh - 1; + desc = &txq->desc_ring[next_use]; + buffer = &txq->buffer_ring[next_use]; + } + sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num); + for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) { + sxe2_tx_desc_fill_one_sse(desc, *tx_pkts, + SXE2_TX_DATA_DESC_CMD_EOP, + with_offloads); + } + next_use += tx_num; + if (next_use > txq->next_rs) { + txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |= + rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK); + txq->next_rs += txq->rs_thresh; + } + txq->next_use = next_use; + SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use); + PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u", + txq->port_id, txq->queue_id, next_use, nb_pkts); +l_end: + return nb_pkts; +} + +static __rte_always_inline u16 +sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq, + struct rte_mbuf **tx_pkts, + u16 nb_pkts, bool with_offloads) +{ + u16 tx_done_num = 0; + u16 tx_once_num; + u16 tx_need_num; + while (nb_pkts) { + tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh); + tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq, + tx_pkts + tx_done_num, + tx_need_num, with_offloads); + nb_pkts -= tx_once_num; + tx_done_num += tx_once_num; + if (tx_once_num < tx_need_num) + break; + } + return tx_done_num; +} + +u16 sxe2_tx_pkts_vec_sse_simple(void *tx_queue, + struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, false); +} +u16 sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) +{ + return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue, + tx_pkts, nb_pkts, true); +} + +static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq) +{ + volatile union sxe2_rx_desc *desc; + struct 
rte_mbuf **buffer; + struct rte_mbuf *mbuf0, *mbuf1; + __m128i dma_addr0, dma_addr1; + __m128i virt_addr0, virt_addr1; + __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, + RTE_PKTMBUF_HEADROOM); + s32 ret; + u16 i; + u16 new_tail; + buffer = &rxq->buffer_ring[rxq->realloc_start]; + desc = &rxq->desc_ring[rxq->realloc_start]; + ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer, + SXE2_RX_REARM_THRESH_VEC); + if (ret != 0) { + PMD_LOG_INFO(RX, "Rx mbuf vec alloc failed port_id=%u " + "queue_id=%u", rxq->port_id, rxq->queue_id); + if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) { + dma_addr0 = _mm_setzero_si128(); + for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) { + buffer[i] = &rxq->fake_mbuf; + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read), + dma_addr0); + } + } + rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed += + SXE2_RX_REARM_THRESH_VEC; + goto l_end; + } + for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) { + mbuf0 = buffer[0]; + mbuf1 = buffer[1]; +#if RTE_IOVA_IN_MBUF + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) != + offsetof(struct rte_mbuf, buf_addr) + 8); +#endif + virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr); + virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr); +#if RTE_IOVA_IN_MBUF + dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1); +#else + dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0); + dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1); +#endif + dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room); + dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr0); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr1); + } + rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC; + if (rxq->realloc_start >= rxq->ring_depth) + rxq->realloc_start = 0; + rxq->realloc_num -= SXE2_RX_REARM_THRESH_VEC; + new_tail = 
(rxq->realloc_start == 0) ? + (rxq->ring_depth - 1) : (rxq->realloc_start - 1); + SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail); +l_end: + return; +} + +static __rte_always_inline __m128i +sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4]) +{ + __m128i descs_tmp1, descs_tmp2; + __m128i descs_fnav_vld; + __m128i v_zeros, v_ffff, v_u32_one; + __m128i m_flags; + const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID); + descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]); + descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]); + descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2); + descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26); + descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31); + v_zeros = _mm_setzero_si128(); + v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros); + v_u32_one = _mm_srli_epi32(v_ffff, 31); + m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one); + m_flags = _mm_and_si128(m_flags, fdir_flags); + return m_flags; +} + +static __rte_always_inline void +sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq, + volatile union sxe2_rx_desc *desc __rte_unused, + __m128i descs_arr[4], + struct rte_mbuf **rx_pkts) +{ + const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value); + __m128i rearm_arr[4]; + __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags; + const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04, + 0x00001C04, 0x00001C04); + const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000, + 0x20000000, 0x20000000); + const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, + 0, 0, 0, RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + 0, 0, 0, 0); + const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH, + 0, 0, 0, 0); + const __m128i cksum_flags = + _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + 
((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_BAD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1), + ((RTE_MBUF_F_RX_L4_CKSUM_GOOD | + RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1)); + const __m128i cksum_mask = + _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, + RTE_MBUF_F_RX_IP_CKSUM_MASK | + RTE_MBUF_F_RX_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK | + RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD); + const __m128i vlan_mask = + _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN | + RTE_MBUF_F_RX_VLAN_STRIPPED, + RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED); + flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]); + tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]); + tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags); + tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags); + tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask); + tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask); + tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo); + flags = _mm_and_si128(tmp_flags, vlan_mask); + tmp_desc_lo = _mm_srli_epi32(tmp_desc_lo, 10); + tmp_flags = 
_mm_shuffle_epi8(cksum_flags, tmp_desc_lo); + tmp_flags = _mm_slli_epi32(tmp_flags, 1); + tmp_flags = _mm_and_si128(tmp_flags, cksum_mask); + flags = _mm_or_si128(flags, tmp_flags); + tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27); + tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi); + flags = _mm_or_si128(flags, tmp_flags); +#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC + if (rxq->fnav_enable) { + __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr); + flags = _mm_or_si128(flags, tmp_fnav_flags); + rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id; + rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id; + rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id; + rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id; + } +#endif + rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30); + rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30); + rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30); + rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) != + offsetof(struct rte_mbuf, rearm_data) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) != + RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16)); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]); + _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]); +} + +static inline u16 +sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts, u8 *split_rxe_flags, + u8 *umbcast_flags) +{ + volatile union sxe2_rx_desc *desc; + struct rte_mbuf **buffer; + __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE]; + __m128i staterr, sterr_tmp1, sterr_tmp2; + __m128i pmbuf0; + __m128i ptype_all; +#ifdef 
RTE_ARCH_X86_64 + __m128i pmbuf1; +#endif + u32 i; + u32 bit_num; + u16 done_num = 0; + const u32 *ptype_tbl = rxq->vsi->adapter->ptype_tbl; + const __m128i crc_adjust = + _mm_set_epi16(0, 0, 0, + -rxq->crc_len, + 0, -rxq->crc_len, + 0, 0); + const __m128i rvp_shuf_mask = + _mm_set_epi8(7, 6, 5, 4, + 3, 2, + 13, 12, + 0XFF, 0xFF, 13, 12, + 0xFF, 0xFF, 0xFF, 0xFF); + const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL, + 0x0000000100000001LL); + const __m128i eop_mask = _mm_slli_epi32(dd_mask, + SXE2_RX_DESC_STATUS_EOP_SHIFT); + const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL, + 0x0000208000002080LL); + const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x04, 0x0C, + 0x00, 0x08); + const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0, + SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12); + desc = &rxq->desc_ring[rxq->processing_idx]; + rte_prefetch0(desc); + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE); + if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC) + sxe2_rx_queue_rearm_sse(rxq); + if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & + SXE2_RX_DESC_STATUS_DD_MASK) == 0) + goto l_end; + buffer = &rxq->buffer_ring[rxq->processing_idx]; + for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE, + desc += SXE2_RX_NUM_PER_LOOP_SSE) { + pmbuf0 = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i])); + descs_arr[3] = 
_mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 3)); + rte_compiler_barrier(); + _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0); +#ifdef RTE_ARCH_X86_64 + pmbuf1 = _mm_loadu_si128((__m128i *)&buffer[i + 2]); +#endif + descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 2)); + rte_compiler_barrier(); + descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 1)); + rte_compiler_barrier(); + descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc)); +#ifdef RTE_ARCH_X86_64 + _mm_storeu_si128((__m128i *)&rx_pkts[i + 2], pmbuf1); +#endif + if (split_rxe_flags) { + rte_mbuf_prefetch_part2(rx_pkts[i]); + rte_mbuf_prefetch_part2(rx_pkts[i + 1]); + rte_mbuf_prefetch_part2(rx_pkts[i + 2]); + rte_mbuf_prefetch_part2(rx_pkts[i + 3]); + } + rte_compiler_barrier(); + mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask); + mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask); + mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask); + mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask); + sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]); + sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]); + sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts); + mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust); + mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust); + mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust); + mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust); + staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2); + ptype_all = _mm_and_si128(staterr, ptype_mask); + _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1, + mbuf_arr[3]); + _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1, + mbuf_arr[2]); + if (umbcast_flags != NULL) { + const __m128i umbcast_mask = + _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK, + SXE2_RX_DESC_STATUS_UMBCAST_MASK); + const __m128i umbcast_shuf_mask = + 
_mm_set_epi8(0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0xFF, 0xFF, + 0x07, 0x0F, + 0x03, 0x0B); + __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask); + umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask); + *(s32 *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits); + umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + if (split_rxe_flags != NULL) { + __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask); + __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask); + rxe_bits = _mm_srli_epi32(rxe_bits, 7); + eop_bits = _mm_or_si128(eop_bits, rxe_bits); + eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask); + *(s32 *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits); + split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE; + } + staterr = _mm_and_si128(staterr, dd_mask); + staterr = _mm_packs_epi32(staterr, _mm_setzero_si128()); + _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1, + mbuf_arr[1]); + _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1, + mbuf_arr[0]); + rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)]; + rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)]; + rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)]; + rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)]; + bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr)); + done_num += bit_num; + if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE)) + break; + } + rxq->processing_idx += done_num; + rxq->processing_idx &= (rxq->ring_depth - 1); + rxq->realloc_num += done_num; + PMD_LOG_DEBUG(RX, "port_id=%u queue_id=%u last_id=%u recv_pkts=%d", + rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num); +l_end: + return done_num; +} +static __rte_always_inline u16 +sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + const u64 *split_rxe_flags64; + u8 split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u8 
umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0}; + u16 rx_done_num; + u16 rx_pkt_done_num; + rx_pkt_done_num = 0; + if (rxq->vsi->adapter->devargs.sw_stats_en) { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, umbcast_flags); + } else { + rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts, + nb_pkts, split_rxe_flags, NULL); + } + if (rx_done_num == 0) + goto l_end; + if (!rxq->vsi->adapter->devargs.sw_stats_en) { + split_rxe_flags64 = (u64 *)split_rxe_flags; + if (rxq->pkt_first_seg == NULL && + split_rxe_flags64[0] == 0 && + split_rxe_flags64[1] == 0 && + split_rxe_flags64[2] == 0 && + split_rxe_flags64[3] == 0) { + rx_pkt_done_num = rx_done_num; + goto l_end; + } + if (rxq->pkt_first_seg == NULL) { + while (rx_pkt_done_num < rx_done_num && + split_rxe_flags[rx_pkt_done_num] == 0) + rx_pkt_done_num++; + if (rx_pkt_done_num == rx_done_num) + goto l_end; + rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num]; + } + } + rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num], + rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num], + &umbcast_flags[rx_pkt_done_num]); +l_end: + return rx_pkt_done_num; +} + +u16 sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue, + struct rte_mbuf **rx_pkts, u16 nb_pkts) +{ + u16 done_num = 0; + u16 once_num; + while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) { + once_num = + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, + SXE2_RX_PKTS_BURST_BATCH_NUM_VEC); + done_num += once_num; + nb_pkts -= once_num; + if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) + goto l_end; + } + done_num += + sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue, + rx_pkts + done_num, nb_pkts); +l_end: + return done_num; +} -- 2.47.3 ^ permalink raw reply related [flat|nested] 143+ messages in thread
* Re: [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback
  2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5
  ` (9 preceding siblings ...)
  2026-05-12 11:36 ` [PATCH v13 10/10] net/sxe2: add vectorized " liujie5
@ 2026-05-13 14:45 ` Stephen Hemminger
  10 siblings, 0 replies; 143+ messages in thread
From: Stephen Hemminger @ 2026-05-13 14:45 UTC (permalink / raw)
To: liujie5; +Cc: dev

On Tue, 12 May 2026 19:36:49 +0800
liujie5@linkdatatechnology.com wrote:

> From: Jie Liu <liujie5@linkdatatechnology.com>
>
> This patch set addresses the feedback received on the v10 submission
> for the sxe2 PMD. The primary focus is on fixing vector path selection,
> ensuring memory safety during mbuf initialization, and cleaning up
> redundant logic in the configuration functions.
>
> v13 Changes:
> - Fixed vector Rx burst function being overwritten by scalar selection.
> - Refactored Rx/Tx mode set functions to seed flags from caps first, eliminating tautological checks.
> - Added memset for mbuf_def in vector init to avoid uninitialized reads.
> - Converted pci_map_addr_info to designated initializers.
> - Removed dead Windows-only code in meson.build.
> - Added NULL checks for mbuf free for driver-wide consistency.
> - Updated burst_mode_get to accurately report AVX paths.
> - Adjusted SXE2_ETH_OVERHEAD to match actual VLAN capabilities.
> > Jie Liu (10): > mailmap: add Jie Liu > doc: add sxe2 guide and release notes > common/sxe2: add sxe2 basic structures > drivers: add base driver skeleton > drivers: add base driver probe skeleton > drivers: support PCI BAR mapping > common/sxe2: add ioctl interface for DMA map and unmap > net/sxe2: support queue setup and control > drivers: add data path for Rx and Tx > net/sxe2: add vectorized Rx and Tx > > .mailmap | 1 + > doc/guides/nics/features/sxe2.ini | 30 + > doc/guides/nics/index.rst | 1 + > doc/guides/nics/sxe2.rst | 34 + > doc/guides/rel_notes/release_26_07.rst | 4 + > drivers/common/sxe2/meson.build | 15 + > drivers/common/sxe2/sxe2_common.c | 685 +++++++++++++++ > drivers/common/sxe2/sxe2_common.h | 86 ++ > drivers/common/sxe2/sxe2_common_log.h | 83 ++ > drivers/common/sxe2/sxe2_errno.h | 110 +++ > drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++ > drivers/common/sxe2/sxe2_internal_ver.h | 33 + > drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++ > drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++ > drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++ > drivers/common/sxe2/sxe2_osal.h | 584 +++++++++++++ > drivers/common/sxe2/sxe2_type.h | 60 ++ > drivers/meson.build | 1 + > drivers/net/meson.build | 1 + > drivers/net/sxe2/meson.build | 32 + > drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++ > drivers/net/sxe2/sxe2_cmd_chnl.h | 33 + > drivers/net/sxe2/sxe2_drv_cmd.h | 389 +++++++++ > drivers/net/sxe2/sxe2_ethdev.c | 941 ++++++++++++++++++++ > drivers/net/sxe2/sxe2_ethdev.h | 315 +++++++ > drivers/net/sxe2/sxe2_irq.h | 49 ++ > drivers/net/sxe2/sxe2_queue.c | 67 ++ > drivers/net/sxe2/sxe2_queue.h | 194 +++++ > drivers/net/sxe2/sxe2_rx.c | 579 +++++++++++++ > drivers/net/sxe2/sxe2_rx.h | 34 + > drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++ > drivers/net/sxe2/sxe2_tx.h | 32 + > drivers/net/sxe2/sxe2_txrx.c | 372 ++++++++ > drivers/net/sxe2/sxe2_txrx.h | 22 + > drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++ > drivers/net/sxe2/sxe2_txrx_poll.c 
| 945 +++++++++++++++++++++ > drivers/net/sxe2/sxe2_txrx_poll.h | 17 + > drivers/net/sxe2/sxe2_txrx_vec.c | 197 +++++ > drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++ > drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++ > drivers/net/sxe2/sxe2_txrx_vec_sse.c | 545 ++++++++++++ > drivers/net/sxe2/sxe2_vsi.c | 212 +++++ > drivers/net/sxe2/sxe2_vsi.h | 205 +++++ > 43 files changed, 9759 insertions(+) > create mode 100644 doc/guides/nics/features/sxe2.ini > create mode 100644 doc/guides/nics/sxe2.rst > create mode 100644 drivers/common/sxe2/meson.build > create mode 100644 drivers/common/sxe2/sxe2_common.c > create mode 100644 drivers/common/sxe2/sxe2_common.h > create mode 100644 drivers/common/sxe2/sxe2_common_log.h > create mode 100644 drivers/common/sxe2/sxe2_errno.h > create mode 100644 drivers/common/sxe2/sxe2_host_regs.h > create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h > create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c > create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h > create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h > create mode 100644 drivers/common/sxe2/sxe2_osal.h > create mode 100644 drivers/common/sxe2/sxe2_type.h > create mode 100644 drivers/net/sxe2/meson.build > create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c > create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h > create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h > create mode 100644 drivers/net/sxe2/sxe2_ethdev.c > create mode 100644 drivers/net/sxe2/sxe2_ethdev.h > create mode 100644 drivers/net/sxe2/sxe2_irq.h > create mode 100644 drivers/net/sxe2/sxe2_queue.c > create mode 100644 drivers/net/sxe2/sxe2_queue.h > create mode 100644 drivers/net/sxe2/sxe2_rx.c > create mode 100644 drivers/net/sxe2/sxe2_rx.h > create mode 100644 drivers/net/sxe2/sxe2_tx.c > create mode 100644 drivers/net/sxe2/sxe2_tx.h > create mode 100644 drivers/net/sxe2/sxe2_txrx.c > create mode 100644 drivers/net/sxe2/sxe2_txrx.h > create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h > 
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
> create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
> create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c
> create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h
> create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h
> create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c
> create mode 100644 drivers/net/sxe2/sxe2_vsi.c
> create mode 100644 drivers/net/sxe2/sxe2_vsi.h
>

Still lots of AI review feedback:

Summary of v13 sxe2 PMD review

Most of the mechanical issues from v10 (carried into v11/v12) have been addressed in v13: duplicate tx_queue_offload_capa is gone, RTE_LOG_REGISTER_SUFFIX no longer registers Tx as "rx", the fopen("/var/log/...") path, SXE2_DPDK_DEBUG, FPGA_VER_ASIC and the debug flags in meson are gone, rx_queue_release / tx_queue_release are in dev_ops, and mmap() is correctly checked against MAP_FAILED.

What still needs to be fixed:

Errors
------

Patch 06/10 - Inverted error checks in sxe2_dev_pci_map_init(). sxe2_dev_pci_res_seg_map() returns 0 on success, negative on failure (the definition is in this same patch). Five consecutive call sites use "if (!ret)" which is true on success, then log "Failed to map..." and goto cleanup, unmapping the BAR segment just mapped and returning ret=0 so the caller thinks probe succeeded. On a real failure, the check is false and the code proceeds as if the mapping had succeeded. Same bug flagged in v10, unchanged in v11/v12/v13.

Patch 04/10 - rte_ticketlock_t (busy-wait FIFO lock) held across blocking ioctl() to the kernel driver in sxe2_drv_cmd_exec(), sxe2_drv_dev_handshark(), and the DMA map/unmap entry points added in patch 07. Other lcores trying to acquire the lock burn CPU spinning until the kernel returns. Use pthread_mutex_t; the lock is in process-private memory so PTHREAD_PROCESS_SHARED is not needed.

Patch 06/10 - Non-ASCII characters in source comments. In sxe2_net_map_addr_info_pf[]: "/* SXE2_PCI_MAP_RES_IRQ_ITR(默认使用ITR0) */". DPDK source should be ASCII English.

Warnings
--------

Patch 02/10 - Feature matrix overclaims. sxe2.ini lists "MTU update = Y" but no .mtu_set is registered, and "Free Tx mbuf on demand = Y" but no .tx_done_cleanup is registered. Either implement the callbacks or fix the matrix.

Patch 09/10 - Dead "if (ret != SXE2_SUCCESS)" check in the secondary-process branch of sxe2_dev_init() after two calls to void-returning functions (sxe2_rx_mode_func_set, sxe2_tx_mode_func_set). ret cannot change, so the error log never fires.

Patch 03/10 - Driver-private kernel-style aliases in sxe2_type.h: u8/s8/u16/.../s64 typedefs, "#define STATIC static", "#define __le16 u16", "#define __be16 u16". DPDK convention is to use the standard names directly. The __le*/__be* defines also erase the endianness annotation rather than preserving it.

Patch 03/10 - sxe2_osal.h reinvents infrastructure that already exists in DPDK: BIT/BIT_ULL/GENMASK/set_bit/test_bit/bitmap_weight (use rte_bitops.h / rte_bitmap.h), LIST_FOR_EACH_ENTRY and friends (use sys/queue.h TAILQ_* or rte_tailq), COMPILER_BARRIER (use rte_compiler_barrier), sxe2_*_lock wrappers around rte_spinlock_*, and udelay/mdelay/msleep aliases for rte_delay_us. The kernel idioms __iomem and IS_ERR/IS_ERR_VALUE do not belong in a userspace PMD.

Patch 03/10 - sxe2_errno.h defines a parallel SXE2_ERR_* namespace that aliases every POSIX errno. The rest of the driver mixes both spellings (-EFAULT and SXE2_ERR_PERM appear in the same file). Pick one and use it everywhere.

Patch 08/10 - Redundant "queue_idx >= nb_*_queues" guards at the top of sxe2_rx_queue_setup / sxe2_tx_queue_setup / queue_start / queue_stop. The ethdev layer validates queue_idx before calling the PMD.

Patches 04 and 06 - Typos in identifiers and log strings: sxe2_commoin_inited (commoin -> common), sxe2_drv_dev_handshark (handshark -> handshake; the ioctl SXE2_COM_CMD_HANDSHAKE is correct, only the wrapper is mistyped), "kernel reseted, need restart app." (reseted -> was reset).

Info
----

Patch 09/10 - sxe2_rx_mode_func_set always picks a scattered Rx burst (split or non-split) regardless of dev->data->scattered_rx. There is no plain single-segment fast path.

Patch 10/10 - The vector capability probe in sxe2_rx_mode_func_set is gated on RTE_PROC_PRIMARY, but a secondary process calls the same function and lands on the scalar path. Since rx_pkt_burst is per-rte_eth_dev and re-assigned by whichever process calls last, the resulting mode depends on attach ordering.

Patch 09/10 - Dead "goto l_end_of_tx;" immediately before the "l_end_of_tx:" label in sxe2_tx_pkts().

Patch 10/10 - The "AVX512/AVX2 is not supported in build env" log lines fire based on CPU capability, not on any build-time absence, so the message is misleading.

The v10-flagged PCI map inverted-check bug is still present after three respins. Longer report available if needed.

^ permalink raw reply	[flat|nested] 143+ messages in thread
* Re: [PATCH v10 00/10] Add Linkdata sxe2 driver
  2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5
  ` (9 preceding siblings ...)
  2026-05-06 11:35 ` [PATCH v10 10/10] net/sxe2: add vectorized " liujie5
@ 2026-05-07  0:23 ` Stephen Hemminger
  10 siblings, 0 replies; 143+ messages in thread
From: Stephen Hemminger @ 2026-05-07  0:23 UTC (permalink / raw)
To: liujie5; +Cc: dev

On Wed, 6 May 2026 19:35:46 +0800
liujie5@linkdatatechnology.com wrote:

> From: Jie Liu <liujie5@linkdatatechnology.com>
>
> V10:
> - Addressed AI comments
>
> Jie Liu (10):
>   mailmap: add Jie Liu
>   doc: add sxe2 guide and release notes
>   drivers: add sxe2 basic structures
>   common/sxe2: add base driver skeleton
>   drivers: add base driver probe skeleton
>   drivers: support PCI BAR mapping
>   common/sxe2: add ioctl interface for DMA map and unmap
>   net/sxe2: support queue setup and control
>   drivers: add data path for Rx and Tx
>   net/sxe2: add vectorized Rx and Tx

My comments first:

- drivers going upstream are expected to be production quality. Having #ifdef for driver-specific stuff indicates to me either that the code is not ready, that existing DPDK debug infrastructure is insufficient, or that the author is unwilling to "kill his darlings". You may not get the last reference, but in writing, "kill your darlings" refers to the problem where the author becomes too attached to characters or scenes and is unwilling to make the deep edits necessary for good writing.

- slicing up an out-of-tree code base leads to messy code. Don't have all the other OS code still in base.

As usual, AI review has lots, lots more to say. Don't treat it all as absolute, but it looks like it found lots of real bugs and things like mismatches between docs and features. If you want to run AI review yourself, see the AGENTS.md file pending in patchwork.

Now for the long AI review:

Review of v10 sxe2 PMD series
=============================

Overall comments
----------------

The series adds a new PMD for the Linkdata sxe2 family.
Several patches in this series contain serious correctness bugs that would prevent the driver from working with real hardware, including inverted error checks that take the failure path on success, an uninitialised mbuf->next dereference in an error path, a use-after-free / double-free in the buffer-split scatter Rx, and a vector-mode selection that is unconditionally overwritten by the scalar selection that follows it. These need to be fixed before the driver can be considered ready for merge.

Beyond those correctness issues, two structural problems in this series need addressing before a merge can be considered. Both are fundamental enough that I want to call them out at the top.

1. The driver opens its own file under /var/log and routes its log lines through a private FILE *.

drivers/common/sxe2/sxe2_common_log.c does fopen() on /var/log/sxe2pmd.log.<timestamp> and then every PMD_LOG_* call goes through a wrapper that does rte_openlog_stream(g_sxe2_common_log_fp) before the log line and rte_openlog_stream(NULL) after. This is not acceptable for a DPDK driver. rte_openlog_stream is a process-global setting; flipping it on every log call means any other PMD or application code in the same process that has set its own log stream gets clobbered every time sxe2 logs anything. /var/log is a privileged path that ordinary DPDK applications cannot write to. The FILE * is opened once and never closed, so it leaks across the lifetime of the process. The whole code path is also gated on SXE2_DPDK_DEBUG, which the driver's meson.build defines unconditionally - so the debug-only path is the production path.

DPDK drivers log via RTE_LOG / RTE_LOG_LINE / RTE_LOG_LINE_PREFIX into the rte_log infrastructure. The application owns the log stream, not the PMD. Please remove the file-logging path entirely and use rte_log directly; if a per-thread/per-port prefix is wanted, use RTE_LOG_LINE_PREFIX.
   The 350+ lines of PMD_LOG_*/LOG_*/LOG_DEV_*/LOG_MSG_* macros in
   sxe2_common_log.h can collapse to a handful of one-line wrappers over
   RTE_LOG_LINE.

2. The driver is being submitted as a slice of a larger out-of-tree
   codebase, with the slicing done at compile time via #ifdef.

   sxe2_drv_cmd.h, sxe2_ioctl_chnl.h and others contain the pattern

	#ifdef SXE2_DPDK_DRIVER
	#include "sxe2_type.h"
	...
	#endif
	#ifdef SXE2_LINUX_DRIVER
	#ifdef __KERNEL__
	#include <linux/types.h>
	#include <linux/if_ether.h>
	#endif
	#endif

   and sxe2_osal.h carries a #ifndef ladder around BIT_ULL,
   BITS_PER_LONG, DIV_ROUND_UP, TAILQ_FOREACH_SAFE and friends so the
   same header can be included from both sides. On top of that there is a
   SXE2_DPDK_DEBUG knob, a SXE2_DPDK_DEBUG_RXTX_LOG knob, a SXE2_TEST
   knob, an FPGA_VER_ASIC knob (defined unconditionally in meson.build,
   so it is dead - but its existence implies multiple build flavours), a
   RTE_LIBRTE_SXE2_16BYTE_RX_DESC knob that selects descriptor format at
   compile time, and a PCLINT knob. The result is that nobody upstream
   can audit "what gets built" without picking a specific combination of
   flags, and the variants that aren't built by default will rot.

   DPDK is not a kernel-driver-sharing host. The expectation is that what
   is submitted is the DPDK driver, written in DPDK style, building
   exactly one binary per arch. Please:

   - Remove the SXE2_LINUX_DRIVER / __KERNEL__ branches from every
     header. The kernel driver lives elsewhere; ship its headers there.
   - Remove the SXE2_DPDK_DRIVER guard - it is the only thing being built
     here.
   - Remove FPGA_VER_ASIC (unused) and PCLINT (lint annotations should
     not be in shipping source).
   - Make RTE_LIBRTE_SXE2_16BYTE_RX_DESC a runtime decision or just pick
     one format. Compile-time descriptor format selection is a pattern
     other PMDs have moved away from.
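To illustrate the macro collapse suggested under point 1, here is a
minimal sketch of the shape: the whole PMD_LOG_* family reduced to
one-line wrappers over a single primitive. A snprintf-into-a-buffer
backend stands in for DPDK's RTE_LOG_LINE purely so the shape is
self-contained; the real wrappers would expand to RTE_LOG_LINE directly,
and the macro names below are illustrative, not the driver's exact set.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Stand-in for RTE_LOG_LINE(level, logtype, ...): captures the formatted
 * line into a buffer instead of the rte_log stream. Demonstration only. */
static char sxe2_log_buf[256];

#define SXE2_LOG(level, fmt, ...) \
	snprintf(sxe2_log_buf, sizeof(sxe2_log_buf), \
		 level ": " fmt, ##__VA_ARGS__)

/* Each former multi-line macro becomes a one-line wrapper; no private
 * stream open/close, and each call emits exactly one line. */
#define PMD_LOG_INFO(fmt, ...) SXE2_LOG("INFO", fmt, ##__VA_ARGS__)
#define PMD_LOG_ERR(fmt, ...)  SXE2_LOG("ERR", fmt, ##__VA_ARGS__)
```

The application keeps ownership of the log stream; the PMD never touches
rte_openlog_stream at all.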
   - SXE2_DPDK_DEBUG / SXE2_DPDK_DEBUG_RXTX_LOG / SXE2_TEST should be
     removed; if a debug counter or extra log line is genuinely useful,
     gate it on the existing RTE_ETHDEV_DEBUG_RX/TX or on a runtime
     devarg, not on a build-time flag the driver defines for itself.

   With those gone, the same set of files will be cleaner, smaller, and a
   lot easier to review.

The driver also reinvents a fair amount of infrastructure that already
exists in DPDK - logging (above), the entire linux-kernel-style
bit/bitmap/list layer in sxe2_osal.h (BIT/GENMASK/set_bit/test_bit/
LIST_FOR_EACH_ENTRY/COMPILER_BARRIER/sxe2_lock/etc.), and a parallel
sxe2_errno.h that maps every errno to a SXE2_ERR_* alias. Please use the
DPDK equivalents (rte_bitops.h, rte_bitmap, sys/queue.h or rte_tailq,
rte_compiler_barrier, rte_spinlock_t, plain -errno) directly.

Severity legend: Error = correctness/build; Warning = should fix;
Info = consider.

Patch 02/10 doc: add sxe2 guide and release notes
--------------------------------------------------

Warning: doc/guides/nics/features/sxe2.ini lists only "Queue start/stop"
and "Linux", but sxe2_dev_infos_get in patch 5 advertises VLAN_STRIP,
KEEP_CRC, SCATTER, RSS_HASH, LRO, BUFFER_SPLIT, multiple checksum
offloads, MBUF_FAST_FREE, TSO, tunnel TSO, etc. The features matrix needs
to be updated to match what dev_info reports (and to drop entries the v10
patches do not yet implement).

Warning: doc/guides/nics/sxe2.rst states "this driver only deals with
virtual memory addresses", but sxe2_drv_dev_dma_map() in patch 7 has an
explicit RTE_IOVA_PA branch and supports PA mode when IOMMU is absent.
Either remove the PA path or update the doc.

Info: sxe2.ini ends without a newline ("\ No newline at end of file" in
the diff).

Info: in sxe2.rst, the "are supported" sentence is missing a trailing
period; reST renders the next blank line as a section break, which is
probably not intended.
Patch 03/10 drivers: add sxe2 basic structures
-----------------------------------------------

Error: drivers/common/sxe2/meson.build enables `-DSXE2_DPDK_DEBUG`
unconditionally, which turns on the fopen()/private-stream logging path in
sxe2_common_log.c. See "Overall comments" - that path needs to go away
entirely, and the build flag needs to go with it.

Error: sxe2_common_log.h's PMD_LOG_NOTICE / PMD_LOG_WARN / PMD_LOG_ERR /
PMD_LOG_CRIT / PMD_LOG_ALERT / PMD_LOG_EMERG (and the matching
PMD_DEV_LOG_* variants) emit the same log line twice when SXE2_DPDK_DEBUG
is on:

	#define PMD_LOG_ERR(logtype, fmt, ...) \
	do { \
		SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \
		sxe2_common_log_stream_open(); \
		SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__); \
		sxe2_common_log_stream_close(); \
	} while (0)

The first SXE2_PMD_LOG goes to whatever rte_log stream was in effect; the
second goes to the private file. This is almost certainly a bug
independent of (1) above. When the file-logging path is removed, just call
SXE2_PMD_LOG once.

Warning: drivers/common/sxe2/meson.build uses `sources = files(...)`
instead of `sources += files(...)`. This works today because nothing else
sets `sources` for this subdir, but the rest of DPDK uses `+=`, and any
future infra change that sets a source from outside will be silently
dropped.

Warning: drivers/common/sxe2/sxe2_type.h does `typedef char s8`. Plain
`char` is signed on x86 and unsigned on arm64/ppc64 by default. s8 is then
used as a string buffer (e.g. `s8 g_sxe2_common_log_filename[]`,
`s8 stime[40]`, `s8 drv_name[32]`) and passed to snprintf/fopen/strerror.
Under -Werror with -Wpointer-sign this will fail to build on platforms
where char is unsigned. Use `int8_t` for s8 (and only use it where you
truly want a signed 8-bit integer), and use `char` for text buffers.

Warning: sxe2_type.h also defines unused S8/S16/S32 typedefs. Drop them.
Warning: sxe2_osal.h defines GENMASK as
`(((~0UL) - (1UL << (l)) + 1) & (~0UL >> (__BITS_PER_LONG - 1 - (h))))`.
When h == __BITS_PER_LONG - 1 the right operand becomes a shift by 0,
which is fine, but the macro also relies on `1UL` being at least
__BITS_PER_LONG bits; on a 32-bit build any GENMASK with l >= 32 invokes
UB. Use RTE_GENMASK64 from rte_bitops.h.

Warning: sxe2_osal.h sxe2_swap_u16() uses an arithmetic swap
(a += b; b = a - b; a -= b). This is harder to read than a tmp variable,
computes the same result, and offers no benefit. Use a tmp.

Info: sxe2_common_log.h's `STIME` macro is defined but unused.

Info: sxe2_internal_ver.h SXE2_MK_VER_MAJOR/MINOR rely on operator
precedence inside macro expansion - parenthesise the macro arguments.

Patch 04/10 common/sxe2: add base driver skeleton
--------------------------------------------------

Error: drivers/common/sxe2/sxe2_common.c sxe2_common_pci_id_table_update()
silently returns SXE2_SUCCESS when calloc() fails:

	s32 ret = SXE2_SUCCESS;
	...
	updated_table = calloc(num_ids, sizeof(*updated_table));
	if (!updated_table) {
		PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table");
		goto l_end;
	}
	...
	l_end:
	return ret;

ret is never set to an error value before the goto, so the caller
(sxe2_common_pci_init / sxe2_common_driver_on_register_pci) proceeds as if
the PCI ID table were updated. Set ret = SXE2_ERR_NO_MEMORY (or -ENOMEM)
before the goto.

Warning: sxe2_ioctl_chnl.c's first include line is indented with a leading
space (` #include <sys/types.h>`); the rest of the includes are not
indented. Fixing it removes the diff indentation noise as well.

Warning: sxe2_drv_dev_close() uses `if (fd > 0)` to decide whether to
close. fd 0 is a legitimate file descriptor (stdin in the host, but also a
valid value returned by open() in unusual cases). Use `if (fd >= 0)`.
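For the calloc error path above, a minimal sketch of the corrected shape
(names are illustrative and a stub allocator replaces the real PCI-ID
table logic; `force_fail` simulates allocation failure): the only point is
that `ret` is assigned before the goto.

```c
#include <errno.h>
#include <stdlib.h>
#include <assert.h>

/* Sketch of sxe2_common_pci_id_table_update()'s fixed error path. */
static int pci_id_table_update(size_t num_ids, int force_fail)
{
	int ret = 0; /* SXE2_SUCCESS */
	void *updated_table = force_fail ? NULL
					 : calloc(num_ids, sizeof(int));

	if (updated_table == NULL) {
		ret = -ENOMEM; /* previously missing: ret stayed 0 */
		goto l_end;
	}
	/* ... swap the new table in ... */
	free(updated_table);
l_end:
	return ret;
}
```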
Warning: sxe2_common_pci_remove() frees cdev->kvargs with `free()`, but if
the kvargs were never allocated (rte_dev->devargs->args was NULL),
cdev->kvargs is also NULL and the free() is fine. However, the comment
"Fail to get remove device" in sxe2_common_pci_dma_unmap (patch 7) is the
unmap path, not remove - copy/paste from the remove function.

Info: sxe2_kvargs_process() uses `(*handler)(...)` syntax; plain
`handler(...)` reads better.

Patch 05/10 drivers: add base driver probe skeleton
----------------------------------------------------

Error: drivers/net/sxe2/sxe2_ethdev.c registers the Tx logtype with the
wrong suffix (twice):

	RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, rx, DEBUG);
	...
	RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, rx, NOTICE);

Both should be `tx`. As written, all Tx logs flow into the rx logtype
name, and rte_log_register_pattern won't be able to address them
separately.

Error: sxe2_eth_pmd_probe_pf() does not set `adapter->cdev` for secondary
processes, but sxe2_dev_init() unconditionally calls sxe2_hw_init() ->
sxe2_dev_caps_get() -> sxe2_func_caps_get() -> sxe2_drv_dev_caps_get(),
which dereferences `adapter->cdev`. NULL-deref on secondary attach.
sxe2_dev_init() should early-return after setting up ops/burst pointers
when rte_eal_process_type() != RTE_PROC_PRIMARY.

Error: sxe2_dev_infos_get() assigns `dev_info->tx_queue_offload_capa`
twice; the second assignment unconditionally overwrites the first with
just RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE. Either drop the second assignment
or change it to `|=`. As written, the per-queue Tx offload caps reported
to the application are just MBUF_FAST_FREE, contradicting the much longer
list a few lines above.

Error: sxe2_eth_pmd_remove() leaks the eth_dev port when sxe2_dev_uninit()
fails. rte_eth_dev_release_port() is only called on the success path.
Either always release or report the leak.
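The tx_queue_offload_capa fix reduces to replacing the second `=` with
`|=`. A sketch with illustrative flag values (not the real
RTE_ETH_TX_OFFLOAD_* bits):

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative flag bits, not DPDK's real offload constants. */
#define TX_OFFLOAD_CKSUM          (UINT64_C(1) << 0)
#define TX_OFFLOAD_TSO            (UINT64_C(1) << 1)
#define TX_OFFLOAD_MBUF_FAST_FREE (UINT64_C(1) << 2)

/* Sketch of the fix: the second capability line ORs into the mask.
 * A plain `=` here would report only MBUF_FAST_FREE to applications. */
static uint64_t tx_queue_offload_capa(void)
{
	uint64_t capa = TX_OFFLOAD_CKSUM | TX_OFFLOAD_TSO;

	capa |= TX_OFFLOAD_MBUF_FAST_FREE;
	return capa;
}
```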
Warning: drivers/net/sxe2/meson.build contains a non-ASCII comment
("#执行子目录base,并获取目标对象" - roughly, "run the base subdirectory
and collect the target objects") and unconditionally adds
`-DFPGA_VER_ASIC`, `-DSXE2_DPDK_DRIVER`, `-g` and `-Werror` to cflags.
-Werror in particular should be controlled via `-Dwerror=true` at the
project level, not hard-coded per-driver. -g should come from the
buildtype. FPGA_VER_ASIC is not used in any of the patches in this series;
if it is dead, drop it.

Warning: sxe2_dev_infos_get() reports a long list of offloads (RSS_HASH,
BUFFER_SPLIT, QINQ_STRIP, VLAN_EXTEND, TCP_LRO, several tunnel TSOs, PTP
timestamp, etc.) that are not implemented anywhere in the v10 series.
Either implement them, or trim dev_info to what actually works. Currently
testpmd will accept and silently ignore flags the driver claims to
support.

Warning: sxe2_dev_infos_get() sets nb_rx_queues and nb_tx_queues from
`dev->data->nb_rx_queues`/nb_tx_queues - those are the
currently-configured queue counts, not the driver capability.
rte_eth_dev_info_get callers expect maxima.

Warning: sxe2_main_vsi_create()'s error path does
`sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi)`, but sxe2_vsi_node_free()
rte_free()s the pointer and then does `vsi = NULL` on its local variable,
leaving `adapter->vsi_ctxt.main_vsi` dangling. sxe2_vsi_uninit() and
sxe2_dev_close() later read this field. Set
adapter->vsi_ctxt.main_vsi = NULL in the caller after the free.

Warning: SXE2_ETH_OVERHEAD in sxe2_ethdev.h hardcodes
`+ SXE2_VLAN_TAG_SIZE * 2` (8 bytes for QinQ), but the driver does not
actually advertise/implement QINQ_STRIP/QINQ_INSERT in v10. This is the
"hardcoded VLAN overhead in a PMD that does not support VLAN" pattern -
either reflect the driver's actual capability or implement QinQ.

Info: sxe2_vsi.h defines an `enum sxe2_vsi_type` and a parallel set of
`sxe2_VSI_DOWN`/`sxe2_VSI_CLOSE` enums with lower-case prefixes -
normalise to the upper-case SXE2_ prefix.
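On the dangling main_vsi field: besides NULLing the field in the caller,
another common shape is to free through a pointer-to-pointer so the
helper clears the caller's field itself. A minimal sketch (stand-in
struct, libc free in place of rte_free):

```c
#include <stdlib.h>
#include <stddef.h>
#include <assert.h>

struct vsi { int id; }; /* stand-in for the driver's VSI node */

/* Sketch: freeing via a pointer-to-pointer clears the caller's field,
 * not a local copy inside the helper, so later reads see NULL. */
static void vsi_node_free(struct vsi **vsip)
{
	free(*vsip);
	*vsip = NULL;
}
```

A second call on the same field is then harmless, since free(NULL) is a
no-op.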
Info: sxe2_vsi.c sxe2_vsi_node_alloc() logs "Failed to malloc vf vsi
struct" even when the caller is creating a PF VSI - a generic message
would be clearer.

Info: sxe2_dev_close() does not yet release queues; this is added
piecemeal in later patches, but the dev_ops table in patch 8 still does
not register .rx_queue_release / .tx_queue_release. See the note on
patch 8.

Patch 06/10 drivers: support PCI BAR mapping
---------------------------------------------

Error (critical): sxe2_dev_pci_map_init() in
drivers/net/sxe2/sxe2_ethdev.c uses inverted error checks for every
sxe2_dev_pci_res_seg_map() call. sxe2_dev_pci_res_seg_map() returns
SXE2_SUCCESS (0) on success, but the caller does:

	ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX,
				       txq_cnt, txq_base);
	if (!ret) {
		PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret);
		goto l_free_seg1;
	}

`!ret` is true on success. All five mapping calls (DOORBELL_TX,
DOORBELL_RX_TAIL, IRQ_DYN, IRQ_ITR, IRQ_MSIX) have the same inversion.
The result is:

- First successful map -> goto l_free_seg1 -> frees bar_info[1].seg_info,
  bar_info[0].seg_info, bar_info.
- But `map_ctxt->bar_info = bar_info;` was set just above the goto, so
  map_ctxt->bar_info is now a dangling pointer.
- ret stays 0, so the function returns SUCCESS.
- Caller (sxe2_dev_init) proceeds.
- Subsequent access to map_ctxt->bar_info from sxe2_dev_get_bar_info() /
  sxe2_pci_map_addr_get() is use-after-free.
- sxe2_dev_pci_map_uinit() will also UAF on map_ctxt->bar_info.

This bug means the driver cannot have been exercised on real hardware in
this form. Fix all five checks to `if (ret)`.

Warning: sxe2_dev_pci_seg_map() uses the `%zu` printf format for org_len
in patch 6, but the prototype takes `u64`; patch 9 then changes the format
to PRIu64. Use PRIu64 from the start to avoid the noise commit.
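The inverted-check fix, reduced to a self-contained sketch with stub
names (the stub keeps the same contract as sxe2_dev_pci_res_seg_map():
0 on success, negative on failure):

```c
#include <assert.h>

/* Stub with the driver's contract: 0 (SXE2_SUCCESS) on success. */
static int res_seg_map(int should_fail)
{
	return should_fail ? -1 : 0;
}

/* Sketch of the corrected caller: the error branch runs on non-zero,
 * i.e. `if (ret)`, never `if (!ret)`. */
static int pci_map_init(int should_fail)
{
	int ret = res_seg_map(should_fail);

	if (ret) { /* was `if (!ret)`: taken on success, skipped on error */
		/* unwind any partially built bar_info here */
		return ret;
	}
	return 0;
}
```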
Patch 07/10 common/sxe2: add ioctl interface for DMA map and unmap
-------------------------------------------------------------------

Warning: sxe2_drv_dev_dma_unmap() does
`if (!cdev->config.support_iommu) return SXE2_SUCCESS;` mid-function,
while every other path in the same file uses `goto l_end`. Make the style
consistent.

Info: sxe2_common_pci_dma_map()/dma_unmap() error logs both say "Fail to
get remove device" - copy/paste from sxe2_common_pci_remove(). Use a
message that matches the operation.

Patch 08/10 net/sxe2: support queue setup and control
------------------------------------------------------

Error: sxe2_rx_queue_mbufs_alloc()'s error path uses mbuf->next, which was
never initialised:

	for (i = 0; i < rxq->ring_depth; i++) {
		mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool);
		if (mbuf == NULL) {
			...
			goto l_err_free_mbuf;
		}
		buf_ring[i] = mbuf;
		...
		} else {
			mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp);
			if (unlikely(!mbuf_pay)) {
				...
				goto l_err_free_mbuf;
			}
			...
			mbuf->next = mbuf_pay;
		}
	}
	l_err_free_mbuf:
	for (j = 0; j <= i; j++) {
		if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) {
			rte_pktmbuf_free(buf_ring[j]->next);
			...
		}
		...
	}

When mbuf_pay allocation fails on iteration i, buf_ring[i] is set to mbuf
but mbuf->next has not yet been assigned - rte_mbuf_raw_alloc() does not
zero ->next. The cleanup loop then reads `buf_ring[i]->next` (an
uninitialised pointer) and may pass it to rte_pktmbuf_free(). Use
rte_pktmbuf_alloc() (which initialises ->next), or set mbuf->next = NULL
right after the raw_alloc.

Error: the dev_ops table in sxe2_ethdev.c is updated to add
.rx_queue_setup, .tx_queue_setup, .rxq_info_get, .txq_info_get, but not
.rx_queue_release / .tx_queue_release. All Rx/Tx queue memory (descriptor
ring memzone, buffer ring, queue struct) is leaked when ports are
reconfigured or closed. Wire up sxe2_rx_queue_release() and
sxe2_tx_queue_release() into dev_ops.
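The mbuf->next fix for the fill loop can be sketched as follows. This is
a stand-in model, not the driver's code: malloc replaces raw_alloc (both
leave ->next indeterminate), and `fail_at` simulates the payload-mbuf
allocation failing mid-ring. The point is the single `m->next = NULL`
line that makes the unwind loop safe.

```c
#include <stdlib.h>
#include <stddef.h>
#include <assert.h>

struct mbuf { struct mbuf *next; }; /* stand-in for rte_mbuf */

/* Sketch of the corrected fill loop: ->next is cleared immediately
 * after the raw allocation, so the unwind loop can inspect and free
 * ->next safely even when a later allocation fails mid-ring. */
static int ring_fill(struct mbuf **ring, int depth, int fail_at)
{
	int i, j;

	for (i = 0; i < depth; i++) {
		struct mbuf *m = malloc(sizeof(*m));

		if (m == NULL)
			return -1;
		m->next = NULL; /* the missing initialisation */
		ring[i] = m;
		if (i == fail_at) /* simulated payload alloc failure */
			goto l_err_free_mbuf;
	}
	return 0;

l_err_free_mbuf:
	for (j = 0; j <= i; j++) {
		free(ring[j]->next); /* guaranteed NULL or a valid chain */
		free(ring[j]);
		ring[j] = NULL;
	}
	return -1;
}
```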
Warning: sxe2_txqs_all_start() / sxe2_txqs_all_stop() /
sxe2_rxqs_all_start() / sxe2_rxqs_all_stop() declare their `dev` parameter
as `__rte_unused` but then read `dev->data`. Drop the __rte_unused
annotation; otherwise readers will assume the parameter really is unused.

Warning: sxe2_tx.c sxe2_txqs_all_start() opens with

	s32 __rte_cold
	sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused)
	{
	struct rte_eth_dev_data *data = dev->data;

The first line of the function body has no leading tab, breaking the
indentation.

Warning: sxe2_rx_queue_alloc() and sxe2_tx_queue_alloc() call
sxe2_rx_queue_release(dev, queue_idx) / sxe2_tx_queue_release(dev,
queue_idx) on the error path right after a fresh rte_zmalloc_socket()
failure, even though dev->data->rx_queues[queue_idx] /
dev->data->tx_queues[queue_idx] have not been written yet. These calls are
no-ops at runtime but they read oddly - drop them and free the
just-allocated objects directly.

Warning: sxe2_rx_queue_setup() validates
`rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)` and
`(offloads & BUFFER_SPLIT) && !(rx_nseg > 1)` - these are validations that
ethdev already performs in rte_eth_rx_queue_setup(). The PMD does not need
to duplicate them.

Info: sxe2_mbuf_raw_alloc() is a one-line wrapper around
rte_mbuf_raw_alloc() that adds nothing. Drop it and call
rte_mbuf_raw_alloc() directly.

Patch 09/10 drivers: add data path for Rx and Tx
-------------------------------------------------

Error: sxe2_rx_pkts_scattered_split() in sxe2_txrx_poll.c has a
use-after-free / double-free in the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT path:

	new_mbuf = NULL;
	while (done_num < nb_pkts) {
		...
		if ((rxq->offloads & BUFFER_SPLIT) == 0 || first_seg == NULL) {
			new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
			if (unlikely(new_mbuf == NULL)) {
				...
				break;
			}
		}
		if (rxq->offloads & BUFFER_SPLIT) {
			new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp);
			if (unlikely(new_mbuf_pay == NULL)) {
				...
				if (new_mbuf != NULL)
					rte_pktmbuf_free(new_mbuf);
				new_mbuf = NULL;
				break;
			}
		}
		...

When BUFFER_SPLIT is enabled and first_seg != NULL (continuing a
multi-descriptor packet), the first `if` block is skipped, so `new_mbuf`
keeps the value it had on the previous iteration - which has already been
stored in *cur_buffer and is now part of the buffer ring. If the
new_mbuf_pay alloc fails on this iteration, the
`if (new_mbuf != NULL) rte_pktmbuf_free(new_mbuf);` frees an mbuf that is
still owned by the buffer ring. The next Rx burst will hand a freed mbuf
back to the application (or the NIC will DMA into freed memory once the
descriptor is rearmed). Set new_mbuf = NULL at the start of every loop
iteration, or only free new_mbuf when this iteration actually allocated
it.

Warning: sxe2_rx_pkts_scattered_split() also writes
`cur_mbuf->next = new_mbuf_pay` (in the first_seg != NULL branch) before
any of the subsequent failure checks. If a later step in the same
iteration would fail, the mbuf chain is left in an inconsistent state.
Reorder so the chain is updated only after all allocations succeed.

Warning: sxe2_tx_pkts_prepare() declares its tx_queue parameter as
`__rte_unused void *tx_queue` but uses `txq->ring_depth`. Drop the
__rte_unused annotation.

Warning: sxe2_set_common_function() assigns
`dev->rx_pkt_burst = sxe2_rx_pkts_scattered;` but sxe2_rx_mode_func_set()
(called later from dev_start) unconditionally overwrites it. The first
assignment is either dead code or needs to gate on something - pick one.

Info: sxe2_rx_mode_func_set() always selects a scattered Rx burst
regardless of `dev->data->scattered_rx`, and without checking whether the
MTU exceeds rx_buf_len. The non-scattered fast path is never reachable.
Either add a non-scattered burst function and dispatch on `scattered_rx`,
or document that this PMD always uses scattered Rx and remove the unused
split between "scattered" and "single-buffer" code paths.
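The per-iteration reset fix for the buffer-split path can be modelled in
a few lines. This is a deliberately reduced stand-in (two descriptors, an
`in_ring` ownership flag instead of real mbuf accounting): iteration 0
starts a packet and hands its header mbuf to the buffer ring, iteration 1
continues the packet and its payload allocation "fails". The function
returns 1 if the failure path would free a ring-owned mbuf - the reported
bug - and 0 with the reset in place.

```c
#include <stdlib.h>
#include <assert.h>

struct mbuf { int in_ring; }; /* stand-in for rte_mbuf */

static int scatter_rx(int reset_each_iter)
{
	struct mbuf *ring[2] = { NULL, NULL };
	struct mbuf *new_mbuf = NULL;
	int first_seg_active = 0;
	int freed_ring_mbuf = 0;
	int i;

	for (i = 0; i < 2; i++) {
		if (reset_each_iter)
			new_mbuf = NULL; /* the fix */
		if (!first_seg_active) { /* start of packet */
			new_mbuf = calloc(1, sizeof(*new_mbuf));
			if (new_mbuf == NULL)
				break;
			new_mbuf->in_ring = 1; /* handed to the buffer ring */
			ring[i] = new_mbuf;
			first_seg_active = 1; /* packet spans descriptors */
		}
		if (i == 1) { /* payload-pool allocation fails here */
			if (new_mbuf != NULL) /* stale without the reset */
				freed_ring_mbuf = new_mbuf->in_ring;
			break;
		}
	}
	free(ring[0]);
	return freed_ring_mbuf;
}
```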
Info: sxe2_txrx.c is hit by a large reformatting of already-correct code
in patch 10 (whitespace removal). Either keep the blank lines from
patch 9, or do the reflow as a single dedicated cleanup patch.

Patch 10/10 net/sxe2: add vectorized Rx and Tx
-----------------------------------------------

Error (critical): sxe2_rx_mode_func_set() in sxe2_txrx.c selects the SSE
vector Rx burst and then immediately overwrites it:

	#ifdef RTE_ARCH_X86
		if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK)
			dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload;
	#endif
		if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
			dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split;
		else
			dev->rx_pkt_burst = sxe2_rx_pkts_scattered;

The vector path is never used in practice. Restructure to either an
`if/else if/else`, or gate the scalar selection on `!vec_path`.

Error: sxe2_rx_queue_vec_init() in sxe2_txrx_vec.c constructs an mbuf on
the stack, sets only buf_addr, nb_segs, data_off, port and refcnt, then
reads `*(u64 *)&mbuf_def.rearm_data`. This is correct only as long as
rearm_data exactly aliases those four fields and nothing else; this is
fragile across DPDK releases. memset() the struct to zero first, then
assign the fields.

Warning: meson.build adds a Windows-only branch inside
`if arch_subdir == 'x86'`:

	if arch_subdir == 'x86'
		sources += files('sxe2_txrx_vec_sse.c')
		if is_windows and cc.get_id() != 'clang'
			cflags += ['-fno-asynchronous-unwind-tables']
		endif
	endif

But the top of meson.build for net/sxe2 already does
`if is_windows: build = false; subdir_done()`, so is_windows is always
false here. Dead branch - drop it.

Warning: the global static `sxe2_net_map_addr_info_pf` table loses its
trailing comments in this patch (the SXE2_PCI_MAP_RES_* identifiers no
longer label each row). Either keep the labels or convert the table to
designated initialisers `[SXE2_PCI_MAP_RES_DOORBELL_TX] = { ... }`.
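The corrected burst selection is a single if/else-if/else chain. A sketch
with illustrative identifiers (not the driver's symbols); the precedence
of the vector path over buffer-split is an assumption here - the driver
may instead need the split path to win when both apply:

```c
#include <assert.h>

/* Illustrative burst-function identifiers. */
enum rx_burst { RX_BURST_VEC_SSE, RX_BURST_SPLIT, RX_BURST_SCALAR };

/* Sketch of the corrected selection: one chain, so the scalar
 * assignment can no longer clobber the vector choice. */
static enum rx_burst rx_burst_select(int vec_ok, int buffer_split)
{
	if (vec_ok)
		return RX_BURST_VEC_SSE;
	else if (buffer_split)
		return RX_BURST_SPLIT;
	else
		return RX_BURST_SCALAR;
}
```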
Warning: sxe2_tx_queue_mbufs_release_vec() does
`rte_pktmbuf_free_seg(buffer[i].mbuf)` without checking buffer[i].mbuf for
NULL. rte_pktmbuf_free_seg() handles NULL on current DPDK, but the rest of
this driver guards explicitly; do the same here for consistency.

Warning: sxe2_rx_mode_func_set() and sxe2_tx_mode_func_set() set
`rx_mode_flags = 0` / `tx_mode_flags = 0`, then check
`(rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0` / similar inside an
`if` block before any flag has been set - those conditions are
tautologically true. Either seed the flags from the caps result first, or
drop the redundant guard.

Info: sxe2_rx_mode_func_set() ends with a `goto l_end;` immediately
followed by the `l_end:` label - the goto is a no-op, drop it.

Info: sxe2_tx_burst_mode_get() / sxe2_rx_burst_mode_get() list only the
scalar and SSE variants in their infos[] table, but the AVX2/AVX512
selection in sxe2_tx_mode_func_set() can in principle assign a non-SSE
burst function to dev->tx_pkt_burst. Either include the AVX paths or drop
the AVX selection logic.

^ permalink raw reply	[flat|nested] 143+ messages in thread
* Re: [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver
  2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5
                     ` (8 preceding siblings ...)
  2026-04-30 10:18 ` [PATCH v3 9/9] net/sxe2: add data path for Rx and Tx liujie5
@ 2026-04-30 16:21 ` Stephen Hemminger
  2026-04-30 17:02   ` Stephen Hemminger
  10 siblings, 0 replies; 143+ messages in thread
From: Stephen Hemminger @ 2026-04-30 16:21 UTC (permalink / raw)
  To: liujie5; +Cc: dev

On Thu, 30 Apr 2026 18:18:08 +0800
liujie5@linkdatatechnology.com wrote:

> From: Jie Liu <liujie5@linkdatatechnology.com>
>
> This patch set implements core functionality for the SXE PMD,
> which is a Linkdata sxe2 ethernet driver.
>
> V3: Addressed AI comments

Please limit submissions of patch updates to one bundle per day. Our CI
is a limited resource.

^ permalink raw reply	[flat|nested] 143+ messages in thread
* Re: [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver
  2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5
                     ` (9 preceding siblings ...)
  2026-04-30 16:21 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver Stephen Hemminger
@ 2026-04-30 17:02 ` Stephen Hemminger
  10 siblings, 0 replies; 143+ messages in thread
From: Stephen Hemminger @ 2026-04-30 17:02 UTC (permalink / raw)
  To: liujie5; +Cc: dev

On Thu, 30 Apr 2026 18:18:08 +0800
liujie5@linkdatatechnology.com wrote:

> From: Jie Liu <liujie5@linkdatatechnology.com>
>
> This patch set implements core functionality for the SXE PMD,
> which is a Linkdata sxe2 ethernet driver.
>
> V3: Addressed AI comments
>
> Jie Liu (9):
>   mailmap: add Jie Liu
>   doc: add sxe2 guide and release notes
>   drivers: add sxe2 basic structures
>   common/sxe2: add base driver skeleton
>   drivers: add base driver probe skeleton
>   drivers: support PCI BAR mapping
>   common/sxe2: add ioctl interface for DMA map and unmap
>   net/sxe2: support queue setup and control
>   net/sxe2: add data path for Rx and Tx
>
>  .mailmap | 1 +
>  doc/guides/nics/features/sxe2.ini | 11 +
>  doc/guides/nics/index.rst | 1 +
>  doc/guides/nics/sxe2.rst | 23 +
>  doc/guides/rel_notes/release_26_07.rst | 3 +
>  drivers/common/sxe2/meson.build | 15 +
>  drivers/common/sxe2/sxe2_common.c | 684 +++++++++++++++
>  drivers/common/sxe2/sxe2_common.h | 86 ++
>  drivers/common/sxe2/sxe2_common_log.c | 75 ++
>  drivers/common/sxe2/sxe2_common_log.h | 263 ++++++
>  drivers/common/sxe2/sxe2_errno.h | 110 +++
>  drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++++
>  drivers/common/sxe2/sxe2_internal_ver.h | 33 +
>  drivers/common/sxe2/sxe2_ioctl_chnl.c | 326 +++++++
>  drivers/common/sxe2/sxe2_ioctl_chnl.h | 141 +++
>  drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 63 ++
>  drivers/common/sxe2/sxe2_osal.h | 582 ++++++++++++
>  drivers/common/sxe2/sxe2_type.h | 64 ++
>  drivers/meson.build | 1 +
>  drivers/net/meson.build | 1 +
>  drivers/net/sxe2/meson.build | 26 +
>  drivers/net/sxe2/sxe2_cmd_chnl.c | 319 +++++++
>  drivers/net/sxe2/sxe2_cmd_chnl.h | 33 +
>  drivers/net/sxe2/sxe2_drv_cmd.h | 398 +++++++++
>  drivers/net/sxe2/sxe2_ethdev.c | 975 +++++++++++++++++++++
>  drivers/net/sxe2/sxe2_ethdev.h | 316 +++++++
>  drivers/net/sxe2/sxe2_irq.h | 49 ++
>  drivers/net/sxe2/sxe2_queue.c | 39 +
>  drivers/net/sxe2/sxe2_queue.h | 227 +++++
>  drivers/net/sxe2/sxe2_rx.c | 579 ++++++++++++
>  drivers/net/sxe2/sxe2_rx.h | 34 +
>  drivers/net/sxe2/sxe2_tx.c | 447 ++++++++++
>  drivers/net/sxe2/sxe2_tx.h | 32 +
>  drivers/net/sxe2/sxe2_txrx.c | 249 ++++++
>  drivers/net/sxe2/sxe2_txrx.h | 21 +
>  drivers/net/sxe2/sxe2_txrx_common.h | 541 ++++++++++++
>  drivers/net/sxe2/sxe2_txrx_poll.c | 782 +++++++++++++++++
>  drivers/net/sxe2/sxe2_txrx_poll.h | 16 +
>  drivers/net/sxe2/sxe2_vsi.c | 211 +++++
>  drivers/net/sxe2/sxe2_vsi.h | 205 +++++
>  40 files changed, 8689 insertions(+)
>  create mode 100644 doc/guides/nics/features/sxe2.ini
>  create mode 100644 doc/guides/nics/sxe2.rst
>  create mode 100644 drivers/common/sxe2/meson.build
>  create mode 100644 drivers/common/sxe2/sxe2_common.c
>  create mode 100644 drivers/common/sxe2/sxe2_common_log.c
>  create mode 100644 drivers/common/sxe2/sxe2_common_log.h
>  create mode 100644 drivers/common/sxe2/sxe2_errno.h
>  create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
>  create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
>  create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
>  create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
>  create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h
>  create mode 100644 drivers/common/sxe2/sxe2_osal.h
>  create mode 100644 drivers/common/sxe2/sxe2_type.h
>  create mode 100644 drivers/net/sxe2/meson.build
>  create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
>  create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
>  create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
>  create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
>  create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
>  create mode 100644 drivers/net/sxe2/sxe2_irq.h
>  create mode 100644 drivers/net/sxe2/sxe2_queue.c
>  create mode 100644 drivers/net/sxe2/sxe2_queue.h
>  create mode 100644 drivers/net/sxe2/sxe2_rx.c
>  create mode 100644 drivers/net/sxe2/sxe2_rx.h
>  create mode 100644 drivers/net/sxe2/sxe2_tx.c
>  create mode 100644 drivers/net/sxe2/sxe2_tx.h
>  create mode 100644 drivers/net/sxe2/sxe2_txrx.c
>  create mode 100644 drivers/net/sxe2/sxe2_txrx.h
>  create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
>  create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
>  create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
>  create mode 100644 drivers/net/sxe2/sxe2_vsi.c
>  create mode 100644 drivers/net/sxe2/sxe2_vsi.h

Since this is a large patch series, I turned to command-line use of
AGENTS.md for AI review and got this nice summary. Lots to fix here.

● Deep Dive Analysis of bundle-1856.mbox DPDK Patch Series

Executive Summary

This patch series introduces the SXE2 Poll Mode Driver (PMD) for DPDK,
submitted by Jie Liu from Linkdata Technology (formerly Wuxi Stars Micro
System Technologies Co., Ltd.). The driver provides support for
10/25/50/100/200 Gbps Network Adapters with embedded switch
functionality, Physical Functions (PF), and SR-IOV Virtual Functions
(VF).

---

Patch Series Overview

The series consists of 9 patches (v3 revision):

1. Patch 1/9: Mailmap update - adds contributor entry
2. Patch 2/9: Documentation - guide and release notes for SXE2 PMD
3. Patch 3/9: Basic structures - common library infrastructure (1959 lines)
4. Patch 4/9: Base driver skeleton - core functionality
5. Patch 5/9: Probe skeleton - PCI device initialization
6. Patch 6/9: PCI BAR mapping support
7. Patch 7/9: DMA mapping via ioctl interface
8. Patch 8/9: Queue setup and control (1161 lines)
9. Patch 9/9: Data path implementation for Rx/Tx (1076 lines)

---

Technical Architecture Analysis

1. Driver Structure
- Common library (drivers/common/sxe2/) - shared functionality
- Network driver (drivers/net/sxe2/) - ethernet-specific implementation
- Uses DPDK's standard PMD framework with proper eth_dev_ops callbacks

2. Key Components
- OSAL (OS Abstraction Layer): Platform-independent interface
- Logging system: Dual logging to console and file (/var/log/sxe2pmd.log.*)
- Command channel: Communication with hardware via ioctl
- DMA mapping: User-space to device memory mapping

3. Hardware Interface
- Supports multiple speeds: 10/25/50/100/200 Gbps
- Virtual memory address handling for security
- Embedded switch support
- SR-IOV capability

---

Code Quality Issues Identified

Critical Issues

1. License Header Inconsistency
- Line 355: # Copyright (c) 2023 Corigine, Inc. in meson.build
- All other files show 2025 Wuxi/Linkdata copyright
- Impact: Legal/compliance issue - appears to be copy-pasted from
  another driver

2. Custom Type Definitions
- Lines 2327-2339: Defines custom types (u8, u16, u32, u64, s8, etc.)
- Problem: Goes against DPDK coding standards, which prefer standard types
- Recommendation: Use uint8_t, uint16_t, etc. directly

3. Debug Logging to File System
- Lines 386-427: Creates log files in /var/log/ during debug mode
- Issues:
  - No permission checking
  - No disk space validation
  - Could fail in containerized environments
- Security concern: world-readable logs

Moderate Issues

4. Excessive Macro Complexity
- Lines 514-652: Complex nested logging macros
- Multiple levels of macro expansion
- Makes debugging difficult

5. Missing Error Handling
- Line 421: goto l_end pattern used but incomplete error paths
- Some functions don't check all allocation failures

6. Documentation Quality
- Line 191: Missing newline at end of sxe2.ini file
- Minimal driver documentation in sxe2.rst
- No performance tuning guidance

Minor Issues

7. Code Style
- Inconsistent use of tabs vs spaces
- Non-standard label naming (l_end: instead of descriptive names)
- Mixed naming conventions (snake_case and camelCase)

8. Thread ID Portability
- Lines 495-498: Windows/Linux compatibility code seems incomplete
- pthread_self() cast to uint64_t may cause issues on some platforms

---

Security Considerations

1. DMA Mapping Interface
- Patch 7/9 implements ioctl-based DMA mapping
- Positive: Includes proper IOVA mode checking
- Concern: No apparent validation of user-provided addresses/sizes

2. Memory Safety
- Uses rte_memcpy and safe string functions
- No obvious buffer overflows detected

3. Virtual Memory Handling
- Claims to only work with virtual addresses for security
- Good design choice for isolation

---

Performance Considerations

1. Data Path
- Patch 9/9 mentions "loop unrolling and prefetching"
- Standard PMD burst interface implementation
- No vector instruction optimization visible

2. Queue Management
- Proper separation of Rx/Tx queue setup
- Hardware tail pointer updates for efficiency

---

Recommendations

Must Fix Before Merge
1. Remove or correct the Corigine copyright in meson.build
2. Replace custom type definitions with standard DPDK types
3. Add proper error handling for file operations
4. Validate DMA mapping parameters

Should Improve
1. Enhance documentation with usage examples
2. Simplify logging macro hierarchy
3. Add performance tuning guide
4. Include test cases

Nice to Have
1. Vector PMD implementation for better performance
2. More comprehensive feature set in sxe2.ini
3. Debugging/troubleshooting guide

---

Overall Assessment

The SXE2 PMD shows a structured approach to DPDK driver development with
proper separation of concerns between common and network-specific code.
However, several code quality and compliance issues need addressing
before the patch series is ready for merge.

Rating: 6/10 - Functional but needs significant cleanup

^ permalink raw reply	[flat|nested] 143+ messages in thread
end of thread, other threads:[~2026-05-13 14:45 UTC | newest]

Thread overview: 143+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-04-30  7:01 [PATCH v1 0/9] common/sxe2: add common functions for sxe2 driver liujie5
2026-04-30  7:01 ` [PATCH v1 1/9] mailmap: add Jie Liu liujie5
2026-04-30  7:01 ` [PATCH v1 2/9] doc: add sxe2 guide and release notes liujie5
2026-04-30  7:01 ` [PATCH v1 3/9] drivers: add sxe2 basic structures liujie5
2026-04-30  7:01 ` [PATCH v1 4/9] common/sxe2: add base driver skeleton liujie5
2026-04-30  7:01 ` [PATCH v1 5/9] drivers: add base driver probe skeleton liujie5
2026-04-30  7:01 ` [PATCH v1 6/9] drivers: support PCI BAR mapping liujie5
2026-04-30  7:01 ` [PATCH v1 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-04-30  7:01 ` [PATCH v1 8/9] net/sxe2: support queue setup and control liujie5
2026-04-30  7:01 ` [PATCH v1 9/9] net/sxe2: add data path for Rx and Tx liujie5
2026-04-30  9:22 ` [PATCH v2 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5
2026-04-30  9:22   ` [PATCH v2 1/9] mailmap: add Jie Liu liujie5
2026-04-30  9:22   ` [PATCH v2 2/9] doc: add sxe2 guide and release notes liujie5
2026-04-30  9:22   ` [PATCH v2 3/9] drivers: add sxe2 basic structures liujie5
2026-04-30  9:22   ` [PATCH v2 4/9] common/sxe2: add base driver skeleton liujie5
2026-04-30  9:22   ` [PATCH v2 5/9] drivers: add base driver probe skeleton liujie5
2026-04-30  9:22   ` [PATCH v2 6/9] drivers: support PCI BAR mapping liujie5
2026-04-30  9:22   ` [PATCH v2 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-04-30  9:22   ` [PATCH v2 8/9] net/sxe2: support queue setup and control liujie5
2026-04-30  9:22   ` [PATCH v2 9/9] net/sxe2: add data path for Rx and Tx liujie5
2026-04-30 10:18 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5
2026-04-30 10:18   ` [PATCH v3 1/9] mailmap: add Jie Liu liujie5
2026-04-30 10:18   ` [PATCH v3 2/9] doc: add sxe2 guide and release notes liujie5
2026-04-30 10:18   ` [PATCH v3 3/9] drivers: add sxe2 basic structures liujie5
2026-04-30 10:18   ` [PATCH v3 4/9] common/sxe2: add base driver skeleton liujie5
2026-04-30 10:18   ` [PATCH v3 5/9] drivers: add base driver probe skeleton liujie5
2026-04-30 10:18   ` [PATCH v3 6/9] drivers: support PCI BAR mapping liujie5
2026-04-30 10:18   ` [PATCH v3 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-04-30 10:18   ` [PATCH v3 8/9] net/sxe2: support queue setup and control liujie5
2026-04-30 10:18   ` [PATCH v3 9/9] net/sxe2: add data path for Rx and Tx liujie5
2026-05-01  1:59 ` [PATCH v4 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5
2026-05-01  1:59   ` [PATCH v4 1/9] mailmap: add Jie Liu liujie5
2026-05-01  1:59   ` [PATCH v4 2/9] doc: add sxe2 guide and release notes liujie5
2026-05-01  1:59   ` [PATCH v4 3/9] drivers: add sxe2 basic structures liujie5
2026-05-01  3:05     ` Stephen Hemminger
2026-05-01  1:59   ` [PATCH v4 4/9] common/sxe2: add base driver skeleton liujie5
2026-05-01  1:59   ` [PATCH v4 5/9] drivers: add base driver probe skeleton liujie5
2026-05-01  1:59   ` [PATCH v4 6/9] drivers: support PCI BAR mapping liujie5
2026-05-01  1:59   ` [PATCH v4 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-01  1:59   ` [PATCH v4 8/9] net/sxe2: support queue setup and control liujie5
2026-05-01  1:59   ` [PATCH v4 9/9] net/sxe2: add data path for Rx and Tx liujie5
2026-05-01  3:33 ` [PATCH v5 0/9] net/sxe2: added Linkdata sxe2 ethernet driver liujie5
2026-05-01  3:33   ` [PATCH v5 1/9] mailmap: add Jie Liu liujie5
2026-05-01  3:33   ` [PATCH v5 2/9] doc: add sxe2 guide and release notes liujie5
2026-05-01  3:33   ` [PATCH v5 3/9] drivers: add sxe2 basic structures liujie5
2026-05-01 14:46     ` Stephen Hemminger
2026-05-01  3:33   ` [PATCH v5 4/9] common/sxe2: add base driver skeleton liujie5
2026-05-01  3:33   ` [PATCH v5 5/9] drivers: add base driver probe skeleton liujie5
2026-05-01  3:33   ` [PATCH v5 6/9] drivers: support PCI BAR mapping liujie5
2026-05-01  3:33   ` [PATCH v5 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-01  3:33   ` [PATCH v5 8/9] net/sxe2: support queue setup and control liujie5
2026-05-01  3:33   ` [PATCH v5 9/9] net/sxe2: add data path for Rx and Tx liujie5
2026-05-06  2:12 ` [PATCH v6 00/10] Add sxe2 driver liujie5
2026-05-06  2:12   ` [PATCH v6 01/10] mailmap: add Jie Liu liujie5
2026-05-06  2:12   ` [PATCH v6 02/10] doc: add sxe2 guide and release notes liujie5
2026-05-06  2:12   ` [PATCH v6 03/10] drivers: add sxe2 basic structures liujie5
2026-05-06  2:12   ` [PATCH v6 04/10] common/sxe2: add base driver skeleton liujie5
2026-05-06  2:12   ` [PATCH v6 05/10] drivers: add base driver probe skeleton liujie5
2026-05-06  2:12   ` [PATCH v6 06/10] drivers: support PCI BAR mapping liujie5
2026-05-06  2:12   ` [PATCH v6 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-06  2:12   ` [PATCH v6 08/10] net/sxe2: support queue setup and control liujie5
2026-05-06  2:12   ` [PATCH v6 09/10] drivers: add data path for Rx and Tx liujie5
2026-05-06  2:12   ` [PATCH v6 10/10] net/sxe2: add vectorized " liujie5
2026-05-06  3:31 ` [PATCH v7 00/10] Add Linkdata sxe2 driver liujie5
2026-05-06  3:31   ` [PATCH v7 01/10] doc: add sxe2 guide and release notes liujie5
2026-05-06  3:31   ` [PATCH v7 02/10] drivers: add sxe2 basic structures liujie5
2026-05-06  3:31   ` [PATCH v7 03/10] common/sxe2: add base driver skeleton liujie5
2026-05-06  3:31   ` [PATCH v7 04/10] drivers: add base driver probe skeleton liujie5
2026-05-06  3:31   ` [PATCH v7 05/10] drivers: support PCI BAR mapping liujie5
2026-05-06  3:31   ` [PATCH v7 06/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-06  3:31   ` [PATCH v7 07/10] net/sxe2: support queue setup and control liujie5
2026-05-06  3:31   ` [PATCH v7 08/10] drivers: add data path for Rx and Tx liujie5
2026-05-06  3:31   ` [PATCH v7 09/10] net/sxe2: add vectorized " liujie5
2026-05-06  6:12 ` [PATCH v8 00/10] Add Linkdata sxe2 driver liujie5
2026-05-06  6:12   ` [PATCH v8 01/10] mailmap: add Jie Liu liujie5
2026-05-06  6:12   ` [PATCH v8 02/10] doc: add sxe2 guide and release notes liujie5
2026-05-06  6:12   ` [PATCH v8 03/10] drivers: add sxe2 basic structures liujie5
2026-05-06  6:12   ` [PATCH v8 04/10] common/sxe2: add base driver skeleton liujie5
2026-05-06  6:12   ` [PATCH v8 05/10] drivers: add base driver probe skeleton liujie5
2026-05-06  6:12   ` [PATCH v8 06/10] drivers: support PCI BAR mapping liujie5
2026-05-06  6:12   ` [PATCH v8 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-06  6:12   ` [PATCH v8 08/10] net/sxe2: support queue setup and control liujie5
2026-05-06  6:12   ` [PATCH v8 09/10] drivers: add data path for Rx and Tx liujie5
2026-05-06  6:12   ` [PATCH v8 10/10] net/sxe2: add vectorized " liujie5
2026-05-06  9:56 ` [PATCH v9 00/10] Add Linkdata sxe2 driver liujie5
2026-05-06  9:56   ` [PATCH v9 01/10] mailmap: add Jie Liu liujie5
2026-05-06  9:56   ` [PATCH v9 02/10] doc: add sxe2 guide and release notes liujie5
2026-05-06  9:56   ` [PATCH v9 03/10] drivers: add sxe2 basic structures liujie5
2026-05-06  9:56   ` [PATCH v9 04/10] common/sxe2: add base driver skeleton liujie5
2026-05-06  9:56   ` [PATCH v9 05/10] drivers: add base driver probe skeleton liujie5
2026-05-06  9:56   ` [PATCH v9 06/10] drivers: support PCI BAR mapping liujie5
2026-05-06  9:56   ` [PATCH v9 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-06  9:57   ` [PATCH v9 08/10] net/sxe2: support queue setup and control liujie5
2026-05-06  9:57   ` [PATCH v9 09/10] drivers: add data path for Rx and Tx liujie5
2026-05-06  9:57   ` [PATCH v9 10/10] net/sxe2: add vectorized " liujie5
2026-05-06 11:35 ` [PATCH v10 00/10] Add Linkdata sxe2 driver liujie5
2026-05-06 11:35   ` [PATCH v10 01/10] mailmap: add Jie Liu liujie5
2026-05-06 11:35   ` [PATCH v10 02/10] doc: add sxe2 guide and release notes liujie5
2026-05-06 11:35   ` [PATCH v10 03/10] drivers: add sxe2 basic structures liujie5
2026-05-06 11:35   ` [PATCH v10 04/10] common/sxe2: add base driver skeleton liujie5
2026-05-06 11:35   ` [PATCH v10 05/10] drivers: add base driver probe skeleton liujie5
2026-05-06 11:35   ` [PATCH v10 06/10] drivers: support PCI BAR mapping liujie5
2026-05-06 11:35   ` [PATCH v10 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-06 11:35   ` [PATCH v10 08/10] net/sxe2: support queue setup and control liujie5
2026-05-06 11:35   ` [PATCH v10 09/10] drivers: add data path for Rx and Tx liujie5
2026-05-06 11:35   ` [PATCH v10 10/10] net/sxe2: add vectorized " liujie5
2026-05-07  1:44 ` [PATCH v11 0/9] Add Linkdata sxe2 driver liujie5
2026-05-07  1:44   ` [PATCH v11 1/9] mailmap: add Jie Liu liujie5
2026-05-07  1:44   ` [PATCH v11 2/9] doc: add sxe2 guide and release notes liujie5
2026-05-07  1:44   ` [PATCH v11 3/9] drivers: add sxe2 basic structures liujie5
2026-05-07  1:44   ` [PATCH v11 4/9] common/sxe2: add base driver skeleton liujie5
2026-05-07  1:44   ` [PATCH v11 5/9] drivers: add base driver probe skeleton liujie5
2026-05-07  1:44   ` [PATCH v11 6/9] drivers: support PCI BAR mapping liujie5
2026-05-07  1:44   ` [PATCH v11 7/9] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-07  1:44   ` [PATCH v11 8/9] net/sxe2: support queue setup and control liujie5
2026-05-07  1:44   ` [PATCH v11 9/9] drivers: add data path for Rx and Tx liujie5
2026-05-07  2:40   ` [PATCH v11 0/9] Add Linkdata sxe2 driver Stephen Hemminger
2026-05-12  8:06 ` [PATCH v12 00/10] net/sxe2: fix logic errors and address feedback liujie5
2026-05-12  8:06   ` [PATCH v12 01/10] mailmap: add Jie Liu liujie5
2026-05-12  8:06   ` [PATCH v12 02/10] doc: add sxe2 guide and release notes liujie5
2026-05-12  8:06   ` [PATCH v12 03/10] common/sxe2: add sxe2 basic structures liujie5
2026-05-12  8:06   ` [PATCH v12 04/10] drivers: add base driver skeleton liujie5
2026-05-12  8:06   ` [PATCH v12 05/10] drivers: add base driver probe skeleton liujie5
2026-05-12  8:06   ` [PATCH v12 06/10] drivers: support PCI BAR mapping liujie5
2026-05-12  8:06   ` [PATCH v12 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-12  8:06   ` [PATCH v12 08/10] net/sxe2: support queue setup and control liujie5
2026-05-12  8:06   ` [PATCH v12 09/10] drivers: add data path for Rx and Tx liujie5
2026-05-12  8:06   ` [PATCH v12 10/10] net/sxe2: add vectorized " liujie5
2026-05-12 11:36 ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback liujie5
2026-05-12 11:36   ` [PATCH v13 01/10] mailmap: add Jie Liu liujie5
2026-05-12 11:36   ` [PATCH v13 02/10] doc: add sxe2 guide and release notes liujie5
2026-05-12 11:36   ` [PATCH v13 03/10] common/sxe2: add sxe2 basic structures liujie5
2026-05-12 11:36   ` [PATCH v13 04/10] drivers: add base driver skeleton liujie5
2026-05-12 11:36   ` [PATCH v13 05/10] drivers: add base driver probe skeleton liujie5
2026-05-12 11:36   ` [PATCH v13 06/10] drivers: support PCI BAR mapping liujie5
2026-05-12 11:36   ` [PATCH v13 07/10] common/sxe2: add ioctl interface for DMA map and unmap liujie5
2026-05-12 11:36   ` [PATCH v13 08/10] net/sxe2: support queue setup and control liujie5
2026-05-12 11:36   ` [PATCH v13 09/10] drivers: add data path for Rx and Tx liujie5
2026-05-12 11:36   ` [PATCH v13 10/10] net/sxe2: add vectorized " liujie5
2026-05-13 14:45   ` [PATCH v13 00/10] net/sxe2: fix logic errors and address feedback Stephen Hemminger
2026-05-07  0:23 ` [PATCH v10 00/10] Add Linkdata sxe2 driver Stephen Hemminger
2026-04-30 16:21 ` [PATCH v3 0/9] net/sxe2: added Linkdata sxe2 ethernet driver Stephen Hemminger
2026-04-30 17:02   ` Stephen Hemminger
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox