* [PATCH 0/7] hinic3 change for support new SPx NIC
@ 2026-01-31 10:05 Feifei Wang
2026-01-31 10:05 ` [PATCH 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang
` (6 more replies)
0 siblings, 7 replies; 80+ messages in thread
From: Feifei Wang @ 2026-01-31 10:05 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang
From: Feifei Wang <wangfeifei40@huawei.com>
Change the hinic3 driver to support Huawei's new SPx series NICs.
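
At a glance, the central switch this series introduces (patches 2/7 and 3/7) is a
per-device command-queue mode selected once at init time from the negotiated
feature bits. The sketch below is illustrative only and reuses identifiers from
the diffs in this thread; choose_cmdq_mode is a made-up helper name (in patch
2/7 the equivalent logic sits in hinic3_cmdq_init()):

	/* New SPx NICs advertise HINIC3_F_ONLY_ENHANCE_CMDQ in features[0]. */
	static void choose_cmdq_mode(struct hinic3_hwdev *hwdev,
				     struct hinic3_cmdqs *cmdqs)
	{
		if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev))
			cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ;
		else
			cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ;
	}

The cmdq send and poll paths then branch on cmdqs->cmdq_mode, building either
the existing 64-byte lcmd WQE or the new 32-byte enhanced WQE.
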
Feifei Wang (7):
net/hinic3: add support for new SPx series NIC
net/hinic3: add enhance cmdq support for new SPx series NIC
net/hinic3: use different callback func to split new/old cmdq
operations
net/hinic3: add func init ops to support Compact CQE
net/hinic3: add rx ops to support Compact CQE
net/hinic3: add tx ops to support Compact CQE
net/hinic3: use different callback func to support htn fdir
drivers/net/hinic3/base/hinic3_cmd.h | 145 +++--
drivers/net/hinic3/base/hinic3_cmdq.c | 400 +++++-------
drivers/net/hinic3/base/hinic3_cmdq.h | 65 +-
drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 110 ++++
drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 169 +++++
drivers/net/hinic3/base/hinic3_csr.h | 16 +-
drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +-
drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +-
drivers/net/hinic3/base/hinic3_hwdev.c | 13 +-
drivers/net/hinic3/base/hinic3_hwdev.h | 18 +
drivers/net/hinic3/base/hinic3_hwif.c | 4 +-
drivers/net/hinic3/base/hinic3_mgmt.c | 5 +-
drivers/net/hinic3/base/hinic3_mgmt.h | 2 +
drivers/net/hinic3/base/hinic3_nic_cfg.c | 167 +++--
drivers/net/hinic3/base/hinic3_nic_cfg.h | 104 ++--
drivers/net/hinic3/base/meson.build | 1 +
drivers/net/hinic3/hinic3_ethdev.c | 240 +++++--
drivers/net/hinic3/hinic3_ethdev.h | 132 ++--
drivers/net/hinic3/hinic3_fdir.c | 589 ++++++++++++------
drivers/net/hinic3/hinic3_fdir.h | 373 +++++++++--
drivers/net/hinic3/hinic3_nic_io.c | 507 +++++++--------
drivers/net/hinic3/hinic3_nic_io.h | 147 +++++
drivers/net/hinic3/hinic3_rx.c | 235 +++++--
drivers/net/hinic3/hinic3_rx.h | 147 +++++
drivers/net/hinic3/hinic3_tx.c | 463 +++++++-------
drivers/net/hinic3/hinic3_tx.h | 144 ++++-
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 163 +++++
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 55 ++
drivers/net/hinic3/htn_adapt/meson.build | 7 +
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 147 +++++
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 38 ++
drivers/net/hinic3/stn_adapt/meson.build | 7 +
32 files changed, 3268 insertions(+), 1391 deletions(-)
create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c
create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h
create mode 100644 drivers/net/hinic3/htn_adapt/meson.build
create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c
create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h
create mode 100644 drivers/net/hinic3/stn_adapt/meson.build
--
2.45.1.windows.1
^ permalink raw reply [flat|nested] 80+ messages in thread

* [PATCH 1/7] net/hinic3: add support for new SPx series NIC
2026-01-31 10:05 [PATCH 0/7] hinic3 change for support new SPx NIC Feifei Wang
@ 2026-01-31 10:05 ` Feifei Wang
2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang
` (3 more replies)
2026-01-31 10:05 ` [PATCH 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC Feifei Wang
` (5 subsequent siblings)
6 siblings, 4 replies; 80+ messages in thread
From: Feifei Wang @ 2026-01-31 10:05 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang

From: Feifei Wang <wangfeifei40@huawei.com>

Add new device id to support Huawei new SPx series Network Adapters.

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_csr.h | 16 ++++++++--------
drivers/net/hinic3/base/hinic3_hwif.c | 4 +++-
drivers/net/hinic3/hinic3_ethdev.c | 14 +++++++-------
3 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h
index 94b10601c4..e601ffafa7 100644
--- a/drivers/net/hinic3/base/hinic3_csr.h
+++ b/drivers/net/hinic3/base/hinic3_csr.h
@@ -5,15 +5,15 @@
 #ifndef _HINIC3_CSR_H_
 #define _HINIC3_CSR_H_
 
-#ifdef CONFIG_SP_VID_DID
-#define PCI_VENDOR_ID_SPNIC	0x1F3F
-#define HINIC3_DEV_ID_STANDARD	0x9020
-#define HINIC3_DEV_ID_VF	0x9001
-#else
 #define PCI_VENDOR_ID_HUAWEI	0x19e5
-#define HINIC3_DEV_ID_STANDARD	0x0222
-#define HINIC3_DEV_ID_VF	0x375F
-#endif
+
+#define HINIC3_DEV_ID_SP620	0x0222
+#define HINIC3_DEV_ID_VF_SP620	0x375F
+
+#define HINIC3_DEV_ID_SP230	0X0229
+#define HINIC3_DEV_ID_VF_SP230	0x3750
+
+#define HINIC3_DEV_ID_920	0x0224
 
 /*
  * Bit30/bit31 for bar index flag.
diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c
index 080254bf44..24afec3d1b 100644
--- a/drivers/net/hinic3/base/hinic3_hwif.c
+++ b/drivers/net/hinic3/base/hinic3_hwif.c
@@ -138,7 +138,9 @@
 
 #define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MASK))
 
-#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF)
+#define HINIC3_IS_VF_DEV(pdev) ( \
+	(pdev)->id.device_id == HINIC3_DEV_ID_VF_SP620 || \
+	(pdev)->id.device_id == HINIC3_DEV_ID_VF_SP230)
 
 uint32_t
 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, uint32_t reg)
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 0e25175ba1..a5116264b0 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -3525,13 +3525,13 @@ hinic3_dev_uninit(struct rte_eth_dev *dev)
 }
 
 static const struct rte_pci_id pci_id_hinic3_map[] = {
-#ifdef CONFIG_SP_VID_DID
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)},
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)},
-#else
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)},
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)},
-#endif
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP620)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP620)},
+
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP230)},
+
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP920)},
 	{.vendor_id = 0},
 };
-- 
2.45.1.windows.1

^ permalink raw reply related [flat|nested] 80+ messages in thread

* [V2 0/7] hinic3 change for support new SPx NIC
2026-01-31 10:05 ` [PATCH 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang
@ 2026-03-16 13:43 ` Feifei Wang
2026-03-16 13:43 ` [V2 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang
` (10 more replies)
2026-03-18 2:19 ` [v3 " Feifei Wang
` (2 subsequent siblings)
3 siblings, 11 replies; 80+ messages in thread
From: Feifei Wang @ 2026-03-16 13:43 UTC (permalink / raw)
To: dev; +Cc: chenyi221

From: chenyi221 <chenyi221@huawei.com>

Change the hinic3 driver to support Huawei's new SPx series NICs.

v2:
--fix build issues

Feifei Wang (7):
net/hinic3: add support for new SPx series NIC
net/hinic3: add enhance cmdq support for new SPx series NIC
net/hinic3: use different callback func to split new/old cmdq
operations
net/hinic3: add func init ops to support Compact CQE
net/hinic3: add rx ops to support Compact CQE
net/hinic3: add tx ops to support Compact CQE
net/hinic3: use different callback func to support htn fdir

drivers/net/hinic3/base/hinic3_cmd.h | 80 ++-
drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------
drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++-
drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++
drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++
drivers/net/hinic3/base/hinic3_csr.h | 18 +-
drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +-
drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +-
drivers/net/hinic3/base/hinic3_hwdev.c | 13 +-
drivers/net/hinic3/base/hinic3_hwdev.h | 18 +
drivers/net/hinic3/base/hinic3_hwif.c | 10 +-
drivers/net/hinic3/base/hinic3_mgmt.c | 5 +-
drivers/net/hinic3/base/hinic3_mgmt.h | 2 +
drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++---
drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++-
drivers/net/hinic3/base/meson.build | 1 +
drivers/net/hinic3/hinic3_ethdev.c | 280 ++++--
drivers/net/hinic3/hinic3_ethdev.h | 120 ++--
drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++-----
drivers/net/hinic3/hinic3_fdir.h | 361 ++++++--
drivers/net/hinic3/hinic3_nic_io.c | 525 ++++--------
drivers/net/hinic3/hinic3_nic_io.h | 163 ++++-
drivers/net/hinic3/hinic3_rx.c | 265 +++++--
drivers/net/hinic3/hinic3_rx.h | 182 ++++-
drivers/net/hinic3/hinic3_tx.c | 458 ++++++------
drivers/net/hinic3/hinic3_tx.h | 154 +++-
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++
drivers/net/hinic3/htn_adapt/meson.build | 7 +
drivers/net/hinic3/meson.build | 8 +-
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++
drivers/net/hinic3/stn_adapt/meson.build | 7 +
33 files changed, 3362 insertions(+), 1443 deletions(-)
create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c
create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h
create mode 100644 drivers/net/hinic3/htn_adapt/meson.build
create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c
create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h
create mode 100644 drivers/net/hinic3/stn_adapt/meson.build
-- 
2.45.1.windows.1

^ permalink raw reply [flat|nested] 80+ messages in thread

* [V2 1/7] net/hinic3: add support for new SPx series NIC
2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang
@ 2026-03-16 13:43 ` Feifei Wang
2026-03-16 13:43 ` [V2 2/7] net/hinic3: add enhance cmdq " Feifei Wang
` (9 subsequent siblings)
10 siblings, 0 replies; 80+ messages in thread
From: Feifei Wang @ 2026-03-16 13:43 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang

From: Feifei Wang <wangfeifei40@huawei.com>

Add new device id to support Huawei new SPx series Network Adapters.

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_csr.h | 18 +++++++++---------
drivers/net/hinic3/base/hinic3_hwif.c | 10 +++++++---
drivers/net/hinic3/hinic3_ethdev.c | 14 +++++++-------
3 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h
index 94b10601c4..c708019c43 100644
--- a/drivers/net/hinic3/base/hinic3_csr.h
+++ b/drivers/net/hinic3/base/hinic3_csr.h
@@ -5,15 +5,15 @@
 #ifndef _HINIC3_CSR_H_
 #define _HINIC3_CSR_H_
 
-#ifdef CONFIG_SP_VID_DID
-#define PCI_VENDOR_ID_SPNIC	0x1F3F
-#define HINIC3_DEV_ID_STANDARD	0x9020
-#define HINIC3_DEV_ID_VF	0x9001
-#else
-#define PCI_VENDOR_ID_HUAWEI	0x19e5
-#define HINIC3_DEV_ID_STANDARD	0x0222
-#define HINIC3_DEV_ID_VF	0x375F
-#endif
+#define PCI_VENDOR_ID_HUAWEI	0x19e5
+
+#define HINIC3_DEV_ID_SP620	0x0222
+#define HINIC3_DEV_ID_VF_SP620	0x375F
+
+#define HINIC3_DEV_ID_SP230	0X0229
+#define HINIC3_DEV_ID_VF_SP230	0x3750
+
+#define HINIC3_DEV_ID_SP920	0x0224
 
 /*
  * Bit30/bit31 for bar index flag.
diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c
index 080254bf44..c82b223fa0 100644
--- a/drivers/net/hinic3/base/hinic3_hwif.c
+++ b/drivers/net/hinic3/base/hinic3_hwif.c
@@ -138,7 +138,11 @@
 
 #define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MASK))
 
-#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF)
+static inline bool hinic3_is_vf_dev(const struct rte_pci_device *pdev)
+{
+	return pdev->id.device_id == HINIC3_DEV_ID_VF_SP620 ||
+	       pdev->id.device_id == HINIC3_DEV_ID_VF_SP230;
+}
 
 uint32_t
 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, uint32_t reg)
@@ -552,7 +556,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev)
 	void *db_base = NULL;
 	int cfg_bar;
 
-	cfg_bar = HINIC3_IS_VF_DEV(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR
+	cfg_bar = hinic3_is_vf_dev(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR
 					    : HINIC3_PF_PCI_CFG_REG_BAR;
 
 	cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr;
@@ -561,7 +565,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev)
 			    "mem_resource addr is null, cfg_regs_base is NULL");
 		return -EFAULT;
 	}
-	if (!HINIC3_IS_VF_DEV(pci_dev)) {
+	if (!hinic3_is_vf_dev(pci_dev)) {
 		mgmt_reg_base =
 			pci_dev->mem_resource[HINIC3_PCI_MGMT_REG_BAR].addr;
 		if (mgmt_reg_base == NULL) {
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 0f72728a95..da2d6722d2 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -3521,13 +3521,13 @@ hinic3_dev_uninit(struct rte_eth_dev *dev)
 }
 
 static const struct rte_pci_id pci_id_hinic3_map[] = {
-#ifdef CONFIG_SP_VID_DID
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)},
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)},
-#else
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)},
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)},
-#endif
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP620)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP620)},
+
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP230)},
+
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP920)},
 	{.vendor_id = 0},
 };
-- 
2.45.1.windows.1

^ permalink raw reply related [flat|nested] 80+ messages in thread

* [V2 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC
2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang
2026-03-16 13:43 ` [V2 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang
@ 2026-03-16 13:43 ` Feifei Wang
2026-03-16 13:43 ` [V2 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang
` (8 subsequent siblings)
10 siblings, 0 replies; 80+ messages in thread
From: Feifei Wang @ 2026-03-16 13:43 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang

From: Feifei Wang <wangfeifei40@huawei.com>

Add enhance command queue support for the new SPx series NIC.

The new SPx series NIC uses an enhance command queue to send messages to
the NIC hardware, which differs from the common command queue used by
earlier SPx NICs.

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_cmd.h | 80 ++--
drivers/net/hinic3/base/hinic3_cmdq.c | 370 +++++------
drivers/net/hinic3/base/hinic3_cmdq.h | 112 +++--
drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 ++++++
drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++
drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +-
drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +-
drivers/net/hinic3/base/hinic3_hwdev.c | 13 +-
drivers/net/hinic3/base/hinic3_hwdev.h | 18 +
drivers/net/hinic3/base/hinic3_mgmt.c | 5 +-
drivers/net/hinic3/base/hinic3_mgmt.h | 2 +
drivers/net/hinic3/base/hinic3_nic_cfg.c | 77 ++--
drivers/net/hinic3/base/meson.build | 1 +
13 files changed, 627 insertions(+), 333 deletions(-)
create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c
create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h

diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h
index 6042ca51bd..d815a3800d 100644
--- a/drivers/net/hinic3/base/hinic3_cmd.h
+++ b/drivers/net/hinic3/base/hinic3_cmd.h
@@ -23,14 +23,21 @@
 #define HINIC3_RSS_TYPE_GET(val, member) \
 	(((uint32_t)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1)
 
+#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size)))
+
 /* NIC CMDQ MODE. */
 enum hinic3_ucode_cmd {
-	HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0,
-	HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT = 1,
-	HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE = 4,
-	HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE = 5,
-	HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE = 6,
-	HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10,
+	HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0,
+	HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+	HINIC3_UCODE_CMD_ARM_SQ,
+	HINIC3_UCODE_CMD_ARM_RQ,
+	HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+	HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+	HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+	HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE,
+	HINIC3_UCODE_CMD_SET_IQ_ENABLE,
+	HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10,
+	HINIC3_UCODE_CMD_MODIFY_VLAN_CTX,
 };
 
 /* Commands between NIC to MPU.
*/ @@ -51,6 +58,12 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_RX_LRO = 13, HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE = 16, + + HINIC3_NIC_CMD_CACHE_OUT_QP_RES = 17, + HINIC3_NIC_CMD_SET_RQ_CI_CTX = 18, + HINIC3_NIC_CMD_SET_RQ_ENABLE = 19, + /* MAC & VLAN CFG */ HINIC3_NIC_CMD_GET_MAC = 20, HINIC3_NIC_CMD_SET_MAC = 21, @@ -59,6 +72,10 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + + HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN = 34, + HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN = 35, + /* RSS CFG */ HINIC3_NIC_CMD_RSS_CFG = 60, HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, @@ -89,6 +106,7 @@ enum hinic3_mgmt_cmd { HINIC3_MGMT_CMD_CFG_PAGESIZE = 22, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG = 23, HINIC3_MGMT_CMD_SET_DMA_ATTR = 25, + HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT = 39, HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40, HINIC3_MGMT_CMD_GET_FW_VERSION = 60, HINIC3_MGMT_CMD_GET_BOARD_INFO = 61, @@ -97,39 +115,39 @@ enum hinic3_mgmt_cmd { }; enum mag_cmd { - SERDES_CMD_PROCESS = 0, + SERDES_CMD_PROCESS = 0, - MAG_CMD_SET_PORT_CFG = 1, - MAG_CMD_SET_PORT_ADAPT = 2, - MAG_CMD_CFG_LOOPBACK_MODE = 3, + MAG_CMD_SET_PORT_CFG = 1, + MAG_CMD_SET_PORT_ADAPT = 2, + MAG_CMD_CFG_LOOPBACK_MODE = 3, - MAG_CMD_GET_PORT_ENABLE = 5, - MAG_CMD_SET_PORT_ENABLE = 6, - MAG_CMD_GET_LINK_STATUS = 7, - MAG_CMD_SET_LINK_FOLLOW = 8, - MAG_CMD_SET_PMA_ENABLE = 9, - MAG_CMD_CFG_FEC_MODE = 10, + MAG_CMD_GET_PORT_ENABLE = 5, + MAG_CMD_SET_PORT_ENABLE = 6, + MAG_CMD_GET_LINK_STATUS = 7, + MAG_CMD_SET_LINK_FOLLOW = 8, + MAG_CMD_SET_PMA_ENABLE = 9, + MAG_CMD_CFG_FEC_MODE = 10, /* PHY */ - MAG_CMD_GET_XSFP_INFO = 60, - MAG_CMD_SET_XSFP_ENABLE = 61, - MAG_CMD_GET_XSFP_PRESENT = 62, + MAG_CMD_GET_XSFP_INFO = 60, + MAG_CMD_SET_XSFP_ENABLE = 61, + MAG_CMD_GET_XSFP_PRESENT = 62, /* sfp/qsfp single byte read/write, for equipment test. 
*/ - MAG_CMD_SET_XSFP_RW = 63, - MAG_CMD_CFG_XSFP_TEMPERATURE = 64, + MAG_CMD_SET_XSFP_RW = 63, + MAG_CMD_CFG_XSFP_TEMPERATURE = 64, - MAG_CMD_WIRE_EVENT = 100, - MAG_CMD_LINK_ERR_EVENT = 101, + MAG_CMD_WIRE_EVENT = 100, + MAG_CMD_LINK_ERR_EVENT = 101, - MAG_CMD_EVENT_PORT_INFO = 150, - MAG_CMD_GET_PORT_STAT = 151, - MAG_CMD_CLR_PORT_STAT = 152, - MAG_CMD_GET_PORT_INFO = 153, - MAG_CMD_GET_PCS_ERR_CNT = 154, - MAG_CMD_GET_MAG_CNT = 155, - MAG_CMD_DUMP_ANTRAIN_INFO = 156, + MAG_CMD_EVENT_PORT_INFO = 150, + MAG_CMD_GET_PORT_STAT = 151, + MAG_CMD_CLR_PORT_STAT = 152, + MAG_CMD_GET_PORT_INFO = 153, + MAG_CMD_GET_PCS_ERR_CNT = 154, + MAG_CMD_GET_MAG_CNT = 155, + MAG_CMD_DUMP_ANTRAIN_INFO = 156, - MAG_CMD_MAX = 0xFF + MAG_CMD_MAX = 0xFF }; #endif /* _HINIC3_CMD_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c index e2b30ff94e..41f411fcbd 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.c +++ b/drivers/net/hinic3/base/hinic3_cmdq.c @@ -5,6 +5,7 @@ #include "hinic3_compat.h" #include "hinic3_cmd.h" #include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" #include "hinic3_hwdev.h" #include "hinic3_hwif.h" #include "hinic3_mgmt.h" @@ -125,17 +126,17 @@ #define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi)) -#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) - #define FIRST_DATA_TO_WRITE_LAST sizeof(uint64_t) -#define WQE_LCMD_SIZE 64 -#define WQE_SCMD_SIZE 64 +#define WQE_LCMDQ_SIZE 64 +#define WQE_SCMDQ_SIZE 64 +#define WQE_ENHANCE_CMDQ_SIZE 32 #define COMPLETE_LEN 3 #define CMDQ_WQEBB_SIZE 64 #define CMDQ_WQEBB_SHIFT 6 +#define CMDQ_ENHANCE_WQEBB_SHIFT 4 #define CMDQ_WQE_SIZE 64 @@ -203,43 +204,6 @@ hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf) rte_free(cmd_buf); } -static uint32_t -cmdq_wqe_size(enum cmdq_wqe_type wqe_type) -{ - uint32_t wqe_size = 0; - - switch (wqe_type) { - case WQE_LCMD_TYPE: - wqe_size = WQE_LCMD_SIZE; - break; - case WQE_SCMD_TYPE: - wqe_size = WQE_SCMD_SIZE; - break; - } - - return wqe_size; -} - -static uint32_t -cmdq_get_wqe_size(enum bufdesc_len len) -{ - uint32_t wqe_size = 0; - - switch (len) { - case BUFDESC_LCMD_LEN: - wqe_size = WQE_LCMD_SIZE; - break; - case BUFDESC_SCMD_LEN: - wqe_size = WQE_SCMD_SIZE; - break; - default: - PMD_DRV_LOG(ERR, "Invalid bufdesc_len"); - break; - } - - return wqe_size; -} - static void cmdq_set_completion(struct hinic3_cmdq_completion *complete, struct hinic3_cmd_buf *buf_out) @@ -274,11 +238,11 @@ cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type, } static void -cmdq_wqe_fill(void *dst, void *src) +cmdq_wqe_fill(void *dst, void *src, int wqe_size) { memcpy((void *)((uint8_t *)dst + FIRST_DATA_TO_WRITE_LAST), (void *)((uint8_t *)src + FIRST_DATA_TO_WRITE_LAST), - CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST); + wqe_size - FIRST_DATA_TO_WRITE_LAST); /* The first 8 bytes should be written last. */ rte_atomic_thread_fence(rte_memory_order_release); @@ -369,182 +333,94 @@ cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in); } -/** - * Prepare necessary context for command queue, send a synchronous command with - * a direct response to hardware. It waits for completion of command by polling - * command queue for a response. - * - * @param[in] cmdq - * The command queue object that represents the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. 
- * @param[in] buf_in - * The input buffer containing the command parameters. - * @param[out] out_param - * A pointer to the location where the response data will be stored, if - * available. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - uint64_t *out_param, uint32_t timeout) +static void +cmdq_sync_wqe_prepare(struct hinic3_cmdq *cmdq, uint8_t mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + struct hinic3_cmdq_wqe *curr_wqe, uint16_t curr_pi, + enum hinic3_cmdq_cmd_type nic_cmd_type) { struct hinic3_cmdq_wqe wqe; - struct hinic3_wq *wq = cmdq->wq; - struct hinic3_cmdq_wqe *curr_wqe = NULL; - struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + int wrapped, wqe_size; + enum cmdq_cmd_type cmd_type; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + wqe_size = cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ ? + WQE_LCMDQ_SIZE : WQE_ENHANCE_CMDQ_SIZE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ - rte_spinlock_lock(&cmdq->cmdq_lock); + memset(&wqe, 0, (uint32_t)wqe_size); - curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } - - memset(&wqe, 0, sizeof(wqe)); wrapped = cmdq->wrapped; - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped, - mod, cmd, curr_prod_idx); - + cmd_type = (nic_cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP) ? + SYNC_CMD_DIRECT_RESP : SYNC_CMD_SGE_RESP; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_set_lcmd_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd, curr_pi); + else + hinic3_enhance_cmdq_set_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd); + /* The data written to HW should be in Big Endian Format */ hinic3_cpu_to_hw(&wqe, wqe_size); - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP; - - cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ - - if (out_param) { - wqe_lcmd = &curr_wqe->wqe_lcmd; - *out_param = rte_cpu_to_be_64(wqe_lcmd->completion.direct_resp); - } - - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; - -cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); - - return err; + cmdq_wqe_fill(curr_wqe, &wqe, wqe_size); } -/** - * Send a synchronous command with detailed response and wait for the - * completion. - * - * @param[in] cmdq - * The command queue object representing the queue to send the command to. 
- * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the parameters for the command. - * @param[out] buf_out - * The output buffer where the detailed response from the hardware will be - * stored. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - struct hinic3_cmd_buf *buf_out, uint32_t timeout) +#define NUM_WQEBBS_FOR_CMDQ_WQE 1 +#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2 + +static int cmdq_sync_cmd(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + uint64_t *out_param, uint32_t timeout, + enum hinic3_cmdq_cmd_type nic_cmd_type) { - struct hinic3_cmdq_wqe wqe; struct hinic3_wq *wq = cmdq->wq; struct hinic3_cmdq_wqe *curr_wqe = NULL; uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + uint32_t time; + uint64_t *direct_resp = NULL; + int err; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + num_wqebbs = (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) ? + NUM_WQEBBS_FOR_CMDQ_WQE : NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ + /* Keep wrapped and doorbell index correct */ rte_spinlock_lock(&cmdq->cmdq_lock); curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { + if (!curr_wqe) { err = -EBUSY; goto cmdq_unlock; } - memset(&wqe, 0, sizeof(wqe)); - wrapped = cmdq->wrapped; + cmdq_sync_wqe_prepare(cmdq, mod, cmd, buf_in, buf_out, + curr_wqe, curr_prod_idx, nic_cmd_type); + + cmdq->cmd_infos[curr_prod_idx].cmd_type = nic_cmd_type; next_prod_idx = curr_prod_idx + num_wqebbs; if (next_prod_idx >= wq->q_depth) { cmdq->wrapped = !cmdq->wrapped; next_prod_idx -= wq->q_depth; } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped, - mod, cmd, curr_prod_idx); - - hinic3_cpu_to_hw(&wqe, wqe_size); - - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP; - - cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); + cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); + time = msecs_to_cycles(timeout ? 
timeout : CMDQ_CMD_TIMEOUT); + err = hinic3_cmdq_poll_msg(cmdq, time); if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); + PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", curr_prod_idx); err = -ETIMEDOUT; goto cmdq_unlock; } - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ + rte_smp_rmb(); /* Read error code after completion */ + + if (out_param) { + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + direct_resp = + (uint64_t *)(&curr_wqe->wqe_lcmd.completion.direct_resp); + else + direct_resp = (uint64_t *) + (&curr_wqe->enhanced_cmdq_wqe.completion.sge_resp_lo_addr); + + *out_param = rte_cpu_to_be_64(*direct_resp); + } if (cmdq->errcode[curr_prod_idx]) err = cmdq->errcode[curr_prod_idx]; @@ -588,7 +464,8 @@ wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs) int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, - struct hinic3_cmd_buf *buf_in, uint64_t *out_param, uint32_t timeout) + struct hinic3_cmd_buf *buf_in, + uint64_t *out_param, uint32_t timeout) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; int err; @@ -605,8 +482,8 @@ hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, out_param, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, + NULL, out_param, timeout, HINIC3_CMD_TYPE_DIRECT_RESP); } int @@ -628,8 +505,8 @@ hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, buf_out, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, buf_out, + NULL, timeout, HINIC3_CMD_TYPE_SGE_RESP); } static void @@ -643,21 +520,23 @@ clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) { struct hinic3_ctrl *ctrl = NULL; uint32_t header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info); - int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN); - uint32_t wqe_size = cmdq_get_wqe_size(buf_len); uint16_t num_wqebbs; - - if (wqe_size == WQE_LCMD_SIZE) - ctrl = &wqe->wqe_lcmd.ctrl; - else - ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; - - /* Clear HW busy bit. */ - ctrl->ctrl_info = 0; + enum data_format df; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT); + if (df == DATA_SGE) + ctrl = &wqe->wqe_lcmd.ctrl; + else + ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; + ctrl->ctrl_info = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE; + } else { + wqe->enhanced_cmdq_wqe.completion.cs_format = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; + } rte_atomic_thread_fence(rte_memory_order_release); /**< Verify wqe is cleared. 
*/ - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq); hinic3_put_wqe(cmdq->wq, num_wqebbs); } @@ -735,25 +614,28 @@ static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; - struct hinic3_cmd_cmdq_ctxt cmdq_ctxt; - enum hinic3_cmdq_type cmdq_type; + struct hinic3_cmd_cmdq_ctxt cmdq_ctxt = {0}; + enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; uint16_t out_size = sizeof(cmdq_ctxt); + uint16_t cmd; int err; - for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { - memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt)); - cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + if (hwdev->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + cmd = HINIC3_MGMT_CMD_SET_CMDQ_CTXT; + } else { + cmdq_ctxt.enhance_ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_enhance_ctxt; + cmd = HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT; + } cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev); cmdq_ctxt.cmdq_id = cmdq_type; - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, - HINIC3_MGMT_CMD_SET_CMDQ_CTXT, + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, &cmdq_ctxt, sizeof(cmdq_ctxt), &cmdq_ctxt, &out_size); - if (err || !out_size || cmdq_ctxt.status) { - PMD_DRV_LOG(ERR, - "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", + PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", err, cmdq_ctxt.status, out_size); return -EFAULT; } @@ -794,6 +676,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) cmdqs->cmdqs_db_base = (uint8_t *)db_base; for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + cmdqs->cmdq[cmdq_type].cmdqs = cmdqs; err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, &cmdqs->saved_wqs[cmdq_type], cmdq_type); if (err) { @@ -801,8 +684,11 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) goto init_cmdq_err; } - cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], - &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + if (cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], + &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + else + hinic3_enhance_cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type]); } err = hinic3_set_cmdq_ctxts(hwdev); @@ -821,11 +707,12 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) } int -hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_init(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = NULL; size_t saved_wqs_size; char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE]; + uint32_t wqebb_shift; int err; cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0); @@ -835,6 +722,14 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) hwdev->cmdqs = cmdqs; cmdqs->hwdev = hwdev; + if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev)) + cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ; + else + cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ; + + wqebb_shift = (cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) ? 
+ CMDQ_ENHANCE_WQEBB_SHIFT : CMDQ_WQEBB_SHIFT; + saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq); cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0); if (!cmdqs->saved_wqs) { @@ -844,8 +739,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE); - snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", - hwdev->port_id); + snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", hwdev->port_id); cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name, HINIC3_CMDQ_DEPTH * HINIC3_MAX_CMDQ_TYPES, 0, 0, @@ -857,8 +751,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES, - HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT, - HINIC3_CMDQ_DEPTH); + HINIC3_CMDQ_WQ_BUF_SIZE, wqebb_shift, HINIC3_CMDQ_DEPTH); if (err) { PMD_DRV_LOG(ERR, "Allocate cmdq failed"); goto cmdq_alloc_err; @@ -884,7 +777,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } void -hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdqs_free(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; @@ -900,14 +793,36 @@ hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) rte_free(cmdqs); } +static int +hinic3_check_cmdq_done(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) +{ + struct hinic3_ctrl *ctrl = NULL; + uint32_t ctrl_info; + + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + /* Only arm bit using scmd wqe, the wqe is lcmd. */ + ctrl = &wqe->wqe_lcmd.ctrl; + ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); + + if (!WQE_COMPLETED(ctrl_info)) + return -EBUSY; + } else { + ctrl_info = wqe->enhanced_cmdq_wqe.completion.cs_format; + ctrl_info = hinic3_hw_cpu32(ctrl_info); + + if (!ENHANCE_CMDQ_WQE_CS_GET(ctrl_info, HW_BUSY)) + return -EBUSY; + } + return 0; +} + static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) { struct hinic3_cmdq_wqe *wqe = NULL; struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - struct hinic3_ctrl *ctrl = NULL; struct hinic3_cmdq_cmd_info *cmd_info = NULL; - uint32_t status_info, ctrl_info; + uint32_t status_info; uint16_t ci; int errcode; uint64_t end; @@ -928,13 +843,10 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) return -EINVAL; } - /* Only arm bit is using scmd wqe, the wqe is lcmd. */ - wqe_lcmd = &wqe->wqe_lcmd; - ctrl = &wqe_lcmd->ctrl; + /* Only arm bit using scmd wqe, the wqe is lcmd. 
*/ end = cycles + msecs_to_cycles(timeout); do { - ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); - if (WQE_COMPLETED(ctrl_info)) { + if (hinic3_check_cmdq_done(cmdq, wqe) == 0) { done = 1; break; } @@ -943,8 +855,14 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) } while (time_before(cycles, end)); if (done) { - status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); - errcode = WQE_ERRCODE_GET(status_info, VAL); + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + wqe_lcmd = &wqe->wqe_lcmd; + status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); + errcode = WQE_ERRCODE_GET(status_info, VAL); + } else { + status_info = hinic3_hw_cpu32(wqe->enhanced_cmdq_wqe.completion.cs_format); + errcode = ENHANCE_CMDQ_WQE_CS_GET(status_info, ERR_CODE); + } cmdq_update_errcode(cmdq, ci, errcode); clear_wqe_complete_bit(cmdq, wqe); err = 0; diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h index deac909488..b31b61029e 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.h +++ b/drivers/net/hinic3/base/hinic3_cmdq.h @@ -13,25 +13,55 @@ /* Pmd driver uses 64, kernel l2nic uses 4096. */ #define HINIC3_CMDQ_DEPTH 64 -#define HINIC3_CMDQ_BUF_SIZE 2048U +#define HINIC3_CMDQ_BUF_SIZE 1024U #define HINIC3_CEQ_ID_CMDQ 0 -enum cmdq_scmd_type { CMDQ_SET_ARM_CMD = 2 }; +#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -enum cmdq_wqe_type { WQE_LCMD_TYPE = 0, WQE_SCMD_TYPE = 1 }; +enum hinic3_cmdq_mode { + HINIC3_NORMAL_CMDQ, + HINIC3_ENHANCE_CMDQ +}; + +enum cmdq_scmd_type { + CMDQ_SET_ARM_CMD = 2 +}; + +enum cmdq_wqe_type { + WQE_LCMD_TYPE, + WQE_SCMD_TYPE +}; -enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 }; +enum ctrl_sect_len { + CTRL_SECT_LEN = 1, + CTRL_DIRECT_SECT_LEN = 2 +}; -enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 }; +enum bufdesc_len { + BUFDESC_LCMD_LEN = 2, + BUFDESC_SCMD_LEN = 3 +}; -enum data_format { DATA_SGE = 0}; +enum data_format { + DATA_SGE +}; -enum completion_format { COMPLETE_DIRECT = 0, COMPLETE_SGE = 1 }; +enum completion_format { + COMPLETE_DIRECT, + COMPLETE_SGE +}; -enum completion_request { CEQ_SET = 1 }; +enum completion_request { + CEQ_SET = 1 +}; -enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD }; +enum cmdq_cmd_type { + SYNC_CMD_DIRECT_RESP, + SYNC_CMD_SGE_RESP, + ASYNC_CMD +}; enum hinic3_cmdq_type { HINIC3_CMDQ_SYNC, @@ -44,17 +74,63 @@ enum hinic3_db_src_type { HINIC3_DB_SRC_L2NIC_SQ_TYPE }; -enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE }; +enum hinic3_cmdq_db_type { + HINIC3_DB_SQ_RQ_TYPE, + HINIC3_DB_CMDQ_TYPE +}; /* Cmdq ack type. */ enum hinic3_ack_type { HINIC3_ACK_TYPE_CMDQ = 0, HINIC3_ACK_TYPE_SHARE_CQN = 1, HINIC3_ACK_TYPE_APP_CQN = 2, - HINIC3_MOD_ACK_MAX = 15 }; +struct cmdq_enhance_completion { + uint32_t cs_format; + uint32_t sge_resp_hi_addr; + uint32_t sge_resp_lo_addr; + uint32_t sge_resp_len; /* bit 14~31 rsvd, soft can't use. 
*/ +}; + +struct cmdq_enhance_response { + uint32_t cs_format; + uint32_t resvd; + uint64_t direct_data; +}; + +struct sge_send_info { + uint32_t sge_hi_addr; + uint32_t sge_li_addr; + uint32_t seg_len; + uint32_t rsvd; +}; + +struct ctrl_section { + uint32_t header; + uint32_t rsv; + uint32_t sge_send_hi_addr; + uint32_t sge_send_lo_addr; +}; + +struct enhanced_cmdq_wqe { + struct ctrl_section ctrl_sec; /* 16B */ + struct cmdq_enhance_completion completion; /* 16B */ +}; + +/* Enhance cmdq context of hardware */ +struct enhance_cmdq_ctxt_info { + uint64_t eq_cfg; + uint64_t dfx_pi_ci; + + uint64_t pft_thd; + uint64_t pft_ci; + + uint64_t rsv; + uint64_t ci_cla_addr; +}; + /* Cmdq wqe ctrls. */ struct hinic3_cmdq_header { uint32_t header_info; @@ -126,6 +202,7 @@ struct hinic3_cmdq_wqe { union { struct hinic3_cmdq_inline_wqe inline_wqe; struct hinic3_cmdq_wqe_lcmd wqe_lcmd; + struct enhanced_cmdq_wqe enhanced_cmdq_wqe; }; }; @@ -142,8 +219,10 @@ struct hinic3_cmd_cmdq_ctxt { uint16_t func_idx; uint8_t cmdq_id; uint8_t rsvd1[5]; - - struct hinic3_cmdq_ctxt_info ctxt_info; + union { + struct hinic3_cmdq_ctxt_info ctxt_info; + struct enhance_cmdq_ctxt_info enhance_ctxt_info; + }; }; enum hinic3_cmdq_status { @@ -173,8 +252,10 @@ struct hinic3_cmdq { rte_spinlock_t cmdq_lock; struct hinic3_cmdq_ctxt_info cmdq_ctxt; + struct enhance_cmdq_ctxt_info cmdq_enhance_ctxt; struct hinic3_cmdq_cmd_info *cmd_infos; + struct hinic3_cmdqs *cmdqs; }; struct hinic3_cmdqs { @@ -188,6 +269,7 @@ struct hinic3_cmdqs { struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES]; uint32_t status; + uint8_t cmdq_mode; }; struct hinic3_cmd_buf { @@ -215,8 +297,8 @@ int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod int hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, uint32_t timeout); -int hinic3_init_cmdqs(struct hinic3_hwdev *hwdev); +int hinic3_cmdq_init(struct hinic3_hwdev *hwdev); -void hinic3_free_cmdqs(struct hinic3_hwdev *hwdev); +void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev); #endif /* _HINIC3_CMDQ_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.c b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c new file mode 100644 index 0000000000..e09597c9f3 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Huawei Technologies Co., Ltd + */ + +#include <rte_mbuf.h> + +#include "hinic3_compat.h" +#include "hinic3_hwdev.h" +#include "hinic3_hwif.h" +#include "hinic3_wq.h" +#include "hinic3_cmd.h" +#include "hinic3_mgmt.h" +#include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" + +#define WQ_PREFETCH_MAX 4 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +void +hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq) +{ + struct enhance_cmdq_ctxt_info *ctxt_info = &cmdq->cmdq_enhance_ctxt; + struct hinic3_wq *wq = cmdq->wq; + uint64_t cmdq_first_block_paddr, pfn; + uint16_t start_ci = (uint16_t)wq->cons_idx; + uint32_t start_pi = (uint16_t)wq->prod_idx; + + /* The data in HW is Big Endian Format */ + cmdq_first_block_paddr = wq->queue_buf_paddr; + pfn = CMDQ_PFN(cmdq_first_block_paddr, RTE_PGSIZE_4K); + + /* First part 16B */ + ctxt_info->eq_cfg = + ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) | + ENHANCED_CMDQ_SET(0, CTXT0_EQ) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) | + ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT); + + 
ctxt_info->dfx_pi_ci = + ENHANCED_CMDQ_SET(0, CTXT1_Q_DIS) | + ENHANCED_CMDQ_SET(0, CTXT1_ERR_CODE) | + ENHANCED_CMDQ_SET(start_pi, CTXT1_PI) | + ENHANCED_CMDQ_SET(start_ci, CTXT1_CI); + + /* Second part 16B */ + ctxt_info->pft_thd = + ENHANCED_CMDQ_SET(CI_HIGN_IDX(start_ci), CTXT2_PFT_CI) | + ENHANCED_CMDQ_SET(1, CTXT2_O_BIT) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MIN, CTXT2_PFT_MIN) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MAX, CTXT2_PFT_MAX) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_THRESHOLD, CTXT2_PFT_THD); + ctxt_info->pft_ci = + ENHANCED_CMDQ_SET(pfn, CTXT3_PFT_CI_ADDR) | + ENHANCED_CMDQ_SET(start_ci, CTXT3_PFT_CI); + + /* Third part 16B */ + pfn = WQ_BLOCK_PFN(cmdq_first_block_paddr); + + ctxt_info->ci_cla_addr = ENHANCED_CMDQ_SET(pfn, CTXT4_CI_CLA_ADDR); +} + +static void +enhance_cmdq_set_completion(struct cmdq_enhance_completion *completion, + const struct hinic3_cmd_buf *buf_out) +{ + completion->sge_resp_hi_addr = upper_32_bits(buf_out->dma_addr); + completion->sge_resp_lo_addr = lower_32_bits(buf_out->dma_addr); + completion->sge_resp_len = HINIC3_CMDQ_BUF_SIZE; +} + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, + enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, + struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd) +{ + struct enhanced_cmdq_wqe *enhanced_wqe = &wqe->enhanced_cmdq_wqe; + + enhanced_wqe->ctrl_sec.header = + ENHANCE_CMDQ_WQE_HEADER_SET(buf_in->size, SEND_SGE_LEN) | + ENHANCE_CMDQ_WQE_HEADER_SET(1, BDSL) | + ENHANCE_CMDQ_WQE_HEADER_SET(DATA_SGE, DF) | + ENHANCE_CMDQ_WQE_HEADER_SET(NORMAL_WQE_TYPE, DN) | + ENHANCE_CMDQ_WQE_HEADER_SET(COMPACT_WQE_TYPE, EC) | + ENHANCE_CMDQ_WQE_HEADER_SET((uint32_t)wrapped, HW_BUSY_BIT); + + enhanced_wqe->ctrl_sec.sge_send_hi_addr = upper_32_bits(buf_in->dma_addr); + enhanced_wqe->ctrl_sec.sge_send_lo_addr = lower_32_bits(buf_in->dma_addr); + + enhanced_wqe->completion.cs_format = + ENHANCE_CMDQ_WQE_CS_SET(cmd, CMD) | + ENHANCE_CMDQ_WQE_CS_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE) | + ENHANCE_CMDQ_WQE_CS_SET(mod, MOD); + + switch (cmd_type) { + case SYNC_CMD_DIRECT_RESP: + enhanced_wqe->completion.cs_format |= ENHANCE_CMDQ_WQE_CS_SET(INLINE_DATA, CF); + break; + case SYNC_CMD_SGE_RESP: + if (buf_out) { + enhanced_wqe->completion.cs_format |= + ENHANCE_CMDQ_WQE_CS_SET(SGE_RESPONSE, CF); + enhance_cmdq_set_completion(&enhanced_wqe->completion, buf_out); + } + break; + case ASYNC_CMD: + break; + } +} diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.h b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h new file mode 100644 index 0000000000..8de0ae4d71 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_CMDQ_ENHANCE_H_ +#define _HINIC3_CMDQ_ENHANCE_H_ + +#include "hinic3_mgmt.h" + +#define NORMAL_WQE_TYPE 0 +#define COMPACT_WQE_TYPE 1 + +/* First part 16B */ +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT0_RSV1_SHIFT 52 +#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT 53 +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT 61 +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT 62 +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63 + +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT0_RSV1_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_EQ_MASK 0xFFU +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_SHIFT 0 
+#define ENHANCED_CMDQ_CTXT1_ERR_CODE_SHIFT 1 +#define ENHANCED_CMDQ_CTXT1_RSV1_SHIFT 3 +#define ENHANCED_CMDQ_CTXT1_PI_SHIFT 32 +#define ENHANCED_CMDQ_CTXT1_CI_SHIFT 48 + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_MASK 0x1U +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_MASK 0x3U +#define ENHANCED_CMDQ_CTXT1_RSV1_MASK 0x1FFFFFFFU +#define ENHANCED_CMDQ_CTXT1_PI_MASK 0xFFFFU +#define ENHANCED_CMDQ_CTXT1_CI_MASK 0xFFFFU + +/* Second part 16B */ +#define ENHANCED_CMDQ_CTXT2_PFT_CI_SHIFT 0 +#define ENHANCED_CMDQ_CTXT2_O_BIT_SHIFT 4 +#define ENHANCED_CMDQ_CTXT2_PFT_THD_SHIFT 32 +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_SHIFT 46 +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_SHIFT 57 + +#define ENHANCED_CMDQ_CTXT2_PFT_CI_MASK 0xFU +#define ENHANCED_CMDQ_CTXT2_O_BIT_MASK 0x1U +#define ENHANCED_CMDQ_CTXT2_PFT_THD_MASK 0x3FFFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_MASK 0x7FFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_MASK 0x7FU + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT3_PFT_CI_SHIFT 52 + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT3_PFT_CI_MASK 0xFFFFU + +/* Third part 16B */ +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_MASK 0x7FFFFFFFFFFFFFU + +#define ENHANCED_CMDQ_SET(val, member) \ + (((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \ + ENHANCED_CMDQ_##member##_SHIFT) + +#define CI_IDX_HIGH_SHIFH 12 +#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_SHIFT 0 +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_SHIFT 19 +#define ENHANCE_CMDQ_WQE_HEADER_DF_SHIFT 28 +#define ENHANCE_CMDQ_WQE_HEADER_DN_SHIFT 29 +#define ENHANCE_CMDQ_WQE_HEADER_EC_SHIFT 30 +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_MASK 0x3FFFFU +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_HEADER_DF_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_DN_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_EC_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_HEADER_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_HEADER_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0 +#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT 4 +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12 +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT 14 +#define ENHANCE_CMDQ_WQE_CS_MOD_SHIFT 16 +#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU +#define ENHANCE_CMDQ_WQE_CS_CMD_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK 0x1U +#define ENHANCE_CMDQ_WQE_CS_MOD_MASK 0x1FU +#define ENHANCE_CMDQ_WQE_CS_CF_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_CS_##member##_MASK) + +enum complete_format { + INLINE_DATA, + SGE_RESPONSE +}; + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd); + +void hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq 
*cmdq); + +#endif /*_HINIC3_CMDQ_ENHANCE_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c index d259b88a2d..6541bc0428 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.c +++ b/drivers/net/hinic3/base/hinic3_hw_comm.c @@ -12,7 +12,7 @@ #include "hinic3_wq.h" #include "hinic3_nic_cfg.h" -/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */ +/* Buffer sizes must be in ascending order. */ const uint32_t hinic3_hw_rx_buf_size[] = { HINIC3_RX_BUF_SIZE_32B, HINIC3_RX_BUF_SIZE_64B, @@ -239,11 +239,14 @@ hinic3_convert_rx_buf_size(uint32_t rx_buf_sz, uint32_t *match_sz) } static uint16_t -get_hw_rx_buf_size(uint32_t rx_buf_sz) +get_hw_rx_buf_size(struct hinic3_hwdev *hwdev, uint32_t rx_buf_sz) { uint16_t num_hw_types = RTE_DIM(hinic3_hw_rx_buf_size); uint16_t i; + if (HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev)) + return rx_buf_sz; + for (i = 0; i < num_hw_types; i++) { if (hinic3_hw_rx_buf_size[i] == rx_buf_sz) return i; @@ -271,8 +274,12 @@ hinic3_set_root_ctxt(struct hinic3_hwdev *hwdev, uint32_t rq_depth, root_ctxt.cmdq_depth = 0; root_ctxt.lro_en = 1; root_ctxt.rq_depth = (uint16_t)rte_log2_u32(rq_depth); - root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz); + root_ctxt.rx_buf_sz = get_hw_rx_buf_size(hwdev, rx_buf_sz); root_ctxt.sq_depth = (uint16_t)rte_log2_u32(sq_depth); + root_ctxt.cmdq_mode = hwdev->cmdqs->cmdq_mode; + + if (hwdev->cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) + root_ctxt.cmdq_depth--; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, @@ -403,7 +410,7 @@ hinic3_comm_features_nego(struct hinic3_hwdev *hwdev, uint16_t out_size = sizeof(feature_nego); int err; - if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD) + if (!hwdev || !s_feature || size > HINIC3_MAX_FEATURE_QWORD) return -EINVAL; memset(&feature_nego, 0, sizeof(feature_nego)); diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h index b86f5aad8f..42ff04ee9d 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.h +++ b/drivers/net/hinic3/base/hinic3_hw_comm.h @@ -9,17 +9,17 @@ #define HINIC3_MGMT_CMD_OP_GET 0 #define HINIC3_MGMT_CMD_OP_SET 1 -#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 -#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 -#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 -#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 -#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 - -#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU -#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU -#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U +#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 +#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 +#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 +#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 +#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 + +#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU +#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU +#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U #define HINIC3_MSIX_CNT_SET(val, member) \ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \ @@ -129,7 +129,7 @@ struct hinic3_cmd_root_ctxt { uint8_t cmdq_depth; uint16_t rx_buf_sz; uint8_t lro_en; - uint8_t rsvd1; + uint8_t cmdq_mode; uint16_t sq_depth; uint16_t rq_depth; uint64_t rsvd2; @@ -143,17 +143,16 @@ enum hinic3_fw_ver_type { HINIC3_FW_VER_TYPE_CFG, }; -#define MGMT_MSG_CMD_OP_SET 1 -#define MGMT_MSG_CMD_OP_GET 0 +#define 
MGMT_MSG_CMD_OP_SET 1 +#define MGMT_MSG_CMD_OP_GET 0 -#define COMM_MAX_FEATURE_QWORD 4 struct comm_cmd_feature_nego { struct mgmt_msg_head head; uint16_t func_id; uint8_t opcode; /**< 1: set, 0: get. */ uint8_t rsvd; - uint64_t s_feature[COMM_MAX_FEATURE_QWORD]; + uint64_t s_feature[HINIC3_MAX_FEATURE_QWORD]; }; #define HINIC3_FW_VERSION_LEN 16 diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c index 668bbf4a0e..5d12cf7b5f 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.c +++ b/drivers/net/hinic3/base/hinic3_hwdev.c @@ -261,7 +261,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) { int err; - err = hinic3_init_cmdqs(hwdev); + err = hinic3_cmdq_init(hwdev); if (err) { PMD_DRV_LOG(ERR, "Init cmd queues failed"); return err; @@ -276,7 +276,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) return 0; set_cmdq_depth_err: - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); return err; } @@ -284,7 +284,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev) { - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); } static void @@ -426,6 +426,12 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) goto func_reset_err; } + err = hinic3_get_comm_features(hwdev, hwdev->features, HINIC3_MAX_FEATURE_QWORD); + if (err) { + PMD_DRV_LOG(ERR, "Get comm features failed"); + goto get_common_features_err; + } + err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1); if (err) goto set_used_state_err; @@ -444,6 +450,7 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0); set_used_state_err: func_reset_err: +get_common_features_err: get_func_info_err: free_mgmt_channel(hwdev); diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h index 161f1e2de5..c6661aa1a6 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.h +++ b/drivers/net/hinic3/base/hinic3_hwdev.h @@ -23,6 +23,18 @@ enum hinic3_set_arm_type { HINIC3_SET_ARM_TYPE_NUM }; +enum { + HINIC3_F_API_CHAIN = 1U << 0, + HINIC3_F_CLP = 1U << 1, + HINIC3_F_CHANNEL_DETECT = 1U << 2, + HINIC3_F_MBOX_SEGMENT = 1U << 3, + HINIC3_F_CMDQ_NUM = 1U << 4, + HINIC3_F_VIRTIO_VQ_SIZE = 1U << 5, + HINIC3_F_EXTEND_CAP = 1U << 6, + HINIC3_F_SMF_CACHE_INVALID = 1U << 7, + HINIC3_F_ONLY_ENHANCE_CMDQ = 1U << 8, + HINIC3_F_USE_REAL_RX_BUF_SIZE = 1U << 9, +}; struct hinic3_page_addr { void *virt_addr; uint64_t phys_addr; @@ -78,6 +90,11 @@ struct hinic3_hw_stats { #define HINIC3_CHIP_FAULT_SIZE (110 * 1024) #define MAX_DRV_BUF_SIZE 4096 +#define HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_ONLY_ENHANCE_CMDQ) +#define HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_USE_REAL_RX_BUF_SIZE) + struct nic_cmd_chip_fault_stats { uint32_t offset; uint8_t chip_fault_stats[MAX_DRV_BUF_SIZE]; @@ -141,6 +158,7 @@ struct hinic3_hwdev { uint16_t max_vfs; uint16_t link_status; + uint64_t features[HINIC3_MAX_FEATURE_QWORD]; }; bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev); diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c index 5db6d49922..b1f850dfff 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.c +++ b/drivers/net/hinic3/base/hinic3_mgmt.c @@ -13,6 +13,8 @@ #define SEGMENT_LEN 48 #define MGMT_MSG_MAX_SEQ_ID \ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN) +#define MGMT_MSG_LAST_SEG_MAX_LEN \ + 
(MAX_PF_MGMT_BUF_SIZE - SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID) #define BUF_OUT_DEFAULT_SIZE 1 @@ -34,7 +36,8 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, uint8_t seq_id, uint8_t seg_len, uint16_t msg_id) { - if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN) + if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN || + (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)) return false; if (seq_id == 0) { diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h index f8148406d3..4e77b9bec4 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.h +++ b/drivers/net/hinic3/base/hinic3_mgmt.h @@ -70,6 +70,8 @@ typedef enum { #define HINIC3_TOE_RES (1 << RES_TYPE_TOE) #define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC) +#define HINIC3_MAX_FEATURE_QWORD 4 + struct hinic3_recv_msg { void *msg; diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index c35fefdeac..ac44da46c2 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -48,6 +48,46 @@ static const struct vf_msg_handler vf_mag_cmd_handler[] = { }, }; +int +hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, + uint16_t cmd, void *buf_in, uint16_t in_size, + void *buf_out, uint16_t *out_size) +{ + uint32_t i; + bool cmd_to_pf = false; + struct hinic3_handler_info handler_info = { + .cmd = cmd, + .buf_in = buf_in, + .in_size = in_size, + .buf_out = buf_out, + .out_size = out_size, + .dst_func = HINIC3_MGMT_SRC_ID, + .direction = HINIC3_MSG_DIRECT_SEND, + .ack_type = HINIC3_MSG_ACK, + }; + + if (hwdev == NULL) + return -EINVAL; + + if (hinic3_func_type(hwdev) == TYPE_VF) { + if (mod == HINIC3_MOD_HILINK) { + for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { + if (cmd == vf_mag_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } else if (mod == HINIC3_MOD_L2NIC) { + for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { + if (cmd == vf_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } + } + if (cmd_to_pf) + handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); + + return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); +} + /** * Set CI table for a SQ. 
* @@ -1712,43 +1752,6 @@ hinic3_set_rq_flush(struct hinic3_hwdev *hwdev, uint16_t q_id) return err; } -int -hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, - uint16_t cmd, void *buf_in, uint16_t in_size, - void *buf_out, uint16_t *out_size) -{ - uint32_t i; - bool cmd_to_pf = false; - struct hinic3_handler_info handler_info = { - .cmd = cmd, - .buf_in = buf_in, - .in_size = in_size, - .buf_out = buf_out, - .out_size = out_size, - .dst_func = HINIC3_MGMT_SRC_ID, - .direction = HINIC3_MSG_DIRECT_SEND, - .ack_type = HINIC3_MSG_ACK, - }; - - if (hinic3_func_type(hwdev) == TYPE_VF) { - if (mod == HINIC3_MOD_HILINK) { - for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { - if (cmd == vf_mag_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } else if (mod == HINIC3_MOD_L2NIC) { - for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { - if (cmd == vf_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } - } - if (cmd_to_pf) - handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); - - return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); -} - int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status) diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build index 48ac7a47f5..729980d087 100644 --- a/drivers/net/hinic3/base/meson.build +++ b/drivers/net/hinic3/base/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2025 Huawei Technologies Co., Ltd base_sources = files( + 'hinic3_cmdq_enhance.c', 'hinic3_cmdq.c', 'hinic3_eqs.c', 'hinic3_hw_cfg.c', -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
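The hunks above make the driver behaviour depend on firmware-negotiated capability bits: hinic3_init_comm_ch() now calls hinic3_get_comm_features() to fill hwdev->features[], and helpers such as HINIC3_IS_USE_REAL_RX_BUF_SIZE() test a HINIC3_F_* bit in features[0] before changing behaviour (here, get_hw_rx_buf_size() passes the real RX buffer size through instead of converting it to a table index). Below is a minimal standalone sketch of that bit-test pattern only; the structure, macro names and bit values are simplified stand-ins for illustration, not the driver's real definitions.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the negotiated feature bits kept in features[0]. */
#define F_ONLY_ENHANCE_CMDQ	(1ULL << 8)
#define F_USE_REAL_RX_BUF_SIZE	(1ULL << 9)

struct fake_hwdev {
	uint64_t features[4]; /* filled once from the management channel */
};

#define USE_REAL_RX_BUF_SIZE(dev) \
	(((dev)->features[0] & F_USE_REAL_RX_BUF_SIZE) != 0)

static uint16_t
get_rx_buf_size_field(const struct fake_hwdev *dev, uint32_t rx_buf_sz)
{
	/* New firmware takes the byte count directly; old firmware expects an index. */
	if (USE_REAL_RX_BUF_SIZE(dev))
		return (uint16_t)rx_buf_sz;
	return 0; /* legacy index lookup elided in this sketch */
}

int
main(void)
{
	struct fake_hwdev dev = { .features = { F_USE_REAL_RX_BUF_SIZE, 0, 0, 0 } };

	printf("rx_buf_sz field = %u\n", get_rx_buf_size_field(&dev, 2048));
	return 0;
}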
* [V2 3/7] net/hinic3: use different callback func to split new/old cmdq operations 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-03-16 13:43 ` [V2 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang 2026-03-16 13:43 ` [V2 2/7] net/hinic3: add enhance cmdq " Feifei Wang @ 2026-03-16 13:43 ` Feifei Wang 2026-03-16 13:43 ` [V2 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang ` (7 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-16 13:43 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> The new SPx series NIC with enhanced cmdq sends control messages to the hardware tile in the NIC (htn); this differs from the previous SPx NIC, which sends control messages to the software tile in the NIC (stn). Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 50 ++---- drivers/net/hinic3/base/hinic3_nic_cfg.h | 79 +++++---- drivers/net/hinic3/hinic3_ethdev.c | 16 +- drivers/net/hinic3/hinic3_nic_io.h | 130 ++++++++++++++ drivers/net/hinic3/hinic3_rx.c | 3 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 161 ++++++++++++++++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 55 ++++++ drivers/net/hinic3/htn_adapt/meson.build | 7 + .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 145 ++++++++++++++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 38 +++++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 11 files changed, 618 insertions(+), 73 deletions(-) create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index ac44da46c2..22caac0457 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -11,6 +11,7 @@ #include "hinic3_mbox.h" #include "hinic3_nic_cfg.h" #include "hinic3_wq.h" +#include "hinic3_nic_io.h" struct vf_msg_handler { uint16_t cmd; @@ -442,6 +443,7 @@ int hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) { struct hinic3_vport_state en_state; + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint16_t out_size = sizeof(en_state); int err; @@ -451,6 +453,7 @@ hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) memset(&en_state, 0, sizeof(en_state)); en_state.func_id = hinic3_global_func_id(hwdev); en_state.state = enable ?
1 : 0; + en_state.num_qps = nic_dev->num_rqs; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_VPORT_ENABLE, @@ -1159,13 +1162,12 @@ hinic3_rss_set_hash_key(struct hinic3_hwdev *hwdev, uint8_t *key, uint16_t key_s } int -hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, - uint32_t *indir_table, uint32_t indir_table_size) +hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table) { struct hinic3_cmd_buf *cmd_buf = NULL; - uint16_t *indir_tbl = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; int err; - uint32_t i; if (!hwdev || !indir_table) return -EINVAL; @@ -1177,31 +1179,28 @@ hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, } cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, - cmd_buf, cmd_buf, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_get_rss_indir_table(nic_dev, cmd_buf); + err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, cmd_buf, 0); if (err) { PMD_DRV_LOG(ERR, "Get rss indir table failed"); hinic3_free_cmd_buf(cmd_buf); return err; } - indir_tbl = (uint16_t *)cmd_buf->buf; - for (i = 0; i < indir_table_size; i++) - indir_table[i] = *(indir_tbl + i); + nic_dev->cmdq_ops->cmd_buf_to_rss_indir_table(cmd_buf, indir_table); hinic3_free_cmd_buf(cmd_buf); return 0; } int -hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size) +hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table) { - struct nic_rss_indirect_tbl *indir_tbl = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - uint32_t i, size; - uint32_t *temp = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; uint64_t out_param = 0; int err; @@ -1214,22 +1213,9 @@ hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table return -ENOMEM; } - cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; - memset(indir_tbl, 0, sizeof(*indir_tbl)); - - for (i = 0; i < indir_table_size; i++) - indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - size = sizeof(indir_tbl->entry) / sizeof(uint16_t); - temp = (uint32_t *)indir_tbl->entry; - for (i = 0; i < size; i++) - temp[i] = rte_cpu_to_be_32(temp[i]); - - err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, - cmd_buf, &out_param, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir_table, cmd_buf); + err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set rss indir table failed"); err = -EFAULT; @@ -1477,7 +1463,7 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EIO; } - *cos_id = vf_dcb.state.default_cos; + *cos_id = vf_dcb.state.default_cos % HINIC3_COS_NUM_MAX_HTN; return 0; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index a88d62333d..06d5bc7d1b 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -14,16 +14,17 @@ #define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1) #define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1) -#define HINIC3_DCB_UP_MAX 0x8 +#define HINIC3_DCB_UP_MAX 0x8 -#define 
HINIC3_MAX_NUM_RQ 256 +#define HINIC3_MAX_NUM_RQ 256 -#define HINIC3_MAX_MTU_SIZE 9600 -#define HINIC3_MIN_MTU_SIZE 256 +#define HINIC3_MAX_MTU_SIZE 9600 +#define HINIC3_MIN_MTU_SIZE 256 -#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX_HTN 4 -#define HINIC3_VLAN_TAG_SIZE 4 +#define HINIC3_VLAN_TAG_SIZE 4 #define HINIC3_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2) @@ -34,28 +35,41 @@ #define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen) -#define HINIC3_PF_SET_VF_ALREADY 0x4 -#define HINIC3_MGMT_STATUS_EXIST 0x6 -#define CHECK_IPSU_15BIT 0x8000 +#define HINIC3_PF_SET_VF_ALREADY 0x4 +#define HINIC3_MGMT_STATUS_EXIST 0x6 +#define CHECK_IPSU_15BIT 0x8000 -#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB -#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC +#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB +#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC -#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF +#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF -#define HINIC3_MAX_UC_MAC_ADDRS 128 -#define HINIC3_MAX_MC_MAC_ADDRS 2048 +#define HINIC3_MAX_UC_MAC_ADDRS 128 +#define HINIC3_MAX_MC_MAC_ADDRS 2048 -#define CAP_INFO_MAX_LEN 512 -#define VENDOR_MAX_LEN 17 +#define CAP_INFO_MAX_LEN 512 +#define VENDOR_MAX_LEN 17 /* Structures for RSS config. */ -#define HINIC3_RSS_INDIR_SIZE 256 -#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 -#define HINIC3_RSS_KEY_SIZE 40 -#define HINIC3_RSS_ENABLE 0x01 -#define HINIC3_RSS_DISABLE 0x00 -#define HINIC3_INVALID_QID_BASE 0xffff +#define HINIC3_RSS_INDIR_SIZE 256 +#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 +#define HINIC3_RSS_KEY_SIZE 40 +#define HINIC3_RSS_ENABLE 0x01 +#define HINIC3_RSS_DISABLE 0x00 +#define HINIC3_INVALID_QID_BASE 0xffff + +#define HINIC3_SUPPORT_FEATURE(dev, feature) \ + ((hinic3_get_driver_feature(dev) & NIC_F_##feature) != 0) +#define HINIC3_SUPPORT_RX_HW_COMPACT_CQE(dev) \ + HINIC3_SUPPORT_FEATURE(dev, RX_HW_COMPACT_CQE) +#define HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(dev) \ + HINIC3_SUPPORT_FEATURE(dev, TX_WQE_COMPACT_TASK) +#define HINIC3_SUPPORT_VXLAN_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, VXLAN_OFFLOAD) +#define HINIC3_SUPPORT_GENEVE_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, GENEVE_OFFLOAD) +#define HINIC3_SUPPORT_IPXIP_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, IPXIP_OFFLOAD) struct hinic3_rss_type { uint8_t tcp_ipv6_ext; @@ -312,7 +326,9 @@ struct hinic3_vport_state { uint16_t func_id; uint16_t rsvd1; uint8_t state; /**< 0:disable, 1:enable. */ - uint8_t rsvd2[3]; + uint8_t num_qps; + uint8_t rx_compact_wqe_en; + uint8_t rsvd2; }; #define MAG_CMD_PORT_DISABLE 0x0 @@ -670,12 +686,15 @@ enum hinic3_func_tbl_cfg_bitmap { FUNC_CFG_INIT, FUNC_CFG_RX_BUF_SIZE, FUNC_CFG_MTU, + FUNC_CFG_RX_COMPACT_WQE_EN, /**< Enable 8Byte WQE. */ }; struct hinic3_func_tbl_cfg { uint16_t rx_wqe_buf_size; uint16_t mtu; - uint32_t rsvd[9]; + uint8_t rx_compact_wqe_en; /**< Enable Rx 8Byte compact WQE. */ + uint8_t rsvd0[3]; + uint32_t rsvd1[8]; }; struct hinic3_cmd_set_func_tbl { @@ -895,7 +914,7 @@ struct hinic3_set_fdir_ethertype_rule { struct mgmt_msg_head head; uint16_t func_id; - uint16_t rsvd1; + uint16_t index; uint8_t pkt_type_en; uint8_t pkt_type; uint8_t qid; @@ -1231,14 +1250,11 @@ int hinic3_rss_template_free(struct hinic3_hwdev *hwdev); * Device pointer to hwdev. * @param[in] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. 
*/ -int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table); /** * Get RSS indirect table. @@ -1247,14 +1263,11 @@ int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_t * Device pointer to hwdev. * @param[out] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table); /** * Set RSS type. diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index da2d6722d2..780b17414a 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -15,6 +15,8 @@ #include "base/hinic3_hw_comm.h" #include "base/hinic3_nic_cfg.h" #include "base/hinic3_nic_event.h" +#include "htn_adapt/hinic3_htn_cmdq.h" +#include "stn_adapt/hinic3_stn_cmdq.h" #include "hinic3_nic_io.h" #include "hinic3_tx.h" #include "hinic3_rx.h" @@ -2577,8 +2579,7 @@ hinic3_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) { PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err); return err; @@ -2626,8 +2627,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) return err; @@ -2648,8 +2648,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, } } - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl); if (err) PMD_DRV_LOG(ERR, "Set RSS reta table failed"); @@ -3387,6 +3386,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto get_cap_fail; } + if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) + nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + else + nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index db5802e4b7..5176f17f09 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -6,6 +6,7 @@ #define _HINIC3_NIC_IO_H_ #include "hinic3_ethdev.h" +#include "base/hinic3_cmdq.h" #define HINIC3_SQ_WQEBB_SHIFT 4 #define HINIC3_RQ_WQEBB_SHIFT 3 @@ -25,6 +26,13 @@ #define HINIC3_CI_PADDR(base_paddr, q_id) \ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE) +#define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) + +#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -37,12 +45,119 @@ enum hinic3_queue_type { HINIC3_MAX_QUEUE_TYPE, }; +enum hinic3_qp_ctxt_type { + HINIC3_QP_CTXT_TYPE_SQ, + HINIC3_QP_CTXT_TYPE_RQ, +}; + +/* Prepare cmd to clean tso/lro space */ +typedef uint8_t 
(*prepare_cmd_buf_clean_tso_lro_space_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type); +/* Prepare cmd to store RQ and TQ ctxt */ +typedef uint8_t (*prepare_cmd_buf_qp_context_multi_store_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, + uint16_t max_ctxts); +/* Prepare cmd to modify vlan tag */ +typedef uint8_t (*prepare_cmd_buf_modify_svlan_t)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, + uint8_t vlan_mode); +/* Prepare cmd to set RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_set_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf); +/* Prepare cmd to get RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_get_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf); +/* Configure RSS indir table */ +typedef void (*cmd_buf_to_rss_indir_table_t)(const struct hinic3_cmd_buf *cmd_buf, + uint32_t *indir_table); + +struct hinic3_nic_cmdq_ops { + prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space; + prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store; + prepare_cmd_buf_modify_svlan_t prepare_cmd_buf_modify_svlan; + prepare_cmd_buf_set_rss_indir_table_t prepare_cmd_buf_set_rss_indir_table; + prepare_cmd_buf_get_rss_indir_table_t prepare_cmd_buf_get_rss_indir_table; + cmd_buf_to_rss_indir_table_t cmd_buf_to_rss_indir_table; +}; + /* Doorbell info. */ struct hinic3_db { uint32_t db_info; uint32_t pi_hi; }; +struct hinic3_sq_ctxt { + uint32_t ci_pi; + uint32_t drop_mode_sp; + uint32_t wq_pfn_hi_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd0; + uint32_t pkt_drop_thd; + uint32_t global_sq_id; + uint32_t vlan_ceq_attr; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t rsvd8; + uint32_t rsvd9; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_ctxt { + uint32_t ci_pi; + uint32_t ceq_attr; + uint32_t wq_pfn_hi_type_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd[3]; + uint32_t cqe_sge_len; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t pi_paddr_hi; + uint32_t pi_paddr_lo; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_cqe_ctx { + struct mgmt_msg_head msg_head; + + uint8_t cqe_type; + uint8_t rq_id; + uint8_t threshold_cqe_num; + uint8_t rsvd1; + + uint16_t msix_entry_idx; + uint16_t rsvd2; + + uint32_t ci_addr_hi; + uint32_t ci_addr_lo; + + uint16_t timer_loop; + uint16_t rsvd3; +}; + +struct hinic3_rq_enable { + struct mgmt_msg_head msg_head; + + uint32_t rq_id; + uint8_t rq_enable; + uint8_t rsvd[3]; +}; + #define DB_INFO_QID_SHIFT 0 #define DB_INFO_NON_FILTER_SHIFT 22 #define DB_INFO_CFLAG_SHIFT 23 @@ -142,6 +257,21 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); + +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @retval Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); + /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e8e417b474..3d5f4e4524 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -407,8 +407,7 @@ hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) /* Build indir tbl according to the number of rss queue. */ hinic3_fill_indir_tbl(nic_dev, indir_tbl); - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c new file mode 100644 index 0000000000..c5d32a33bb --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_htn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + ctxt_block->cmdq_hdr.dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id, uint16_t func_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->dest_func_id = func_id; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t func_id; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + func_id = hinic3_global_func_id(nic_dev->hwdev); + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid, func_id); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], + start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->dest_func_id = func_id; + vlan_ctx->start_qid = q_id; + vlan_ctx->vlan_tag = vlan_tag; + vlan_ctx->vlan_sel = 0; 
/* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_HTN_CMD_SVLAN_MODIFY; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf + sizeof(struct hinic3_rss_cmd_header); + cmd_buf->size = sizeof(struct hinic3_rss_cmd_header) + HINIC3_RSS_INDIR_SIZE; + memset(indir_tbl, 0, HINIC3_RSS_INDIR_SIZE); + + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl[i] = (uint8_t)(*(indir_table + i)); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(indir_tbl, HINIC3_RSS_INDIR_SIZE); + + return HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; +} + +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + memset(cmd_buf->buf, 0, cmd_buf->size); + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + return HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_be32_to_cpu(cmd_buf->buf, HINIC3_RSS_INDIR_SIZE); + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h new file mode 100644 index 0000000000..1245b9c8d8 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_HTN_CMDQ_H_ +#define _HINIC3_HTN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint32_t rsvd[2]; + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_rss_cmd_header { + uint32_t rsv[3]; + uint16_t rsv1; + uint16_t dest_func_id; +}; + +/* NIC HTN CMD */ +enum hinic3_htn_cmd { + 
HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST = 0x20, + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_LD, + HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN, + HINIC3_HTN_CMD_SVLAN_MODIFY, + HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE, + HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE +}; + +struct hinic3_vlan_ctx { + uint32_t rsv[2]; + uint16_t vlan_tag; + uint8_t vlan_sel; + uint8_t vlan_mode; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +#endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/htn_adapt/meson.build b/drivers/net/hinic3/htn_adapt/meson.build new file mode 100644 index 0000000000..17f7ad09e3 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_htn_cmdq.c', +) diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c new file mode 100644 index 0000000000..fa16508d32 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_stn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->rsvd = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->func_id = func_id; + vlan_ctx->qid = q_id; + vlan_ctx->vlan_id = 
vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i, size; + uint32_t *temp = NULL; + struct nic_rss_indirect_tbl *indir_tbl = NULL; + + indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; + cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); + memset(indir_tbl, 0, sizeof(*indir_tbl)); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); + size = sizeof(indir_tbl->entry) / sizeof(uint32_t); + temp = (uint32_t *)indir_tbl->entry; + for (i = 0; i < size; i++) { + rte_atomic_thread_fence(rte_memory_order_seq_cst); + temp[i] = rte_cpu_to_be_32(temp[i]); + } + return HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + (void)nic_dev; + memset(cmd_buf->buf, 0, cmd_buf->size); + + return HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint16_t *indir_tbl = NULL; + + indir_tbl = (uint16_t *)cmd_buf->buf; + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h new file mode 100644 index 0000000000..f8d26e9397 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_STN_CMDQ_H_ +#define _HINIC3_STN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t rsvd; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; + uint32_t rsvd; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_vlan_ctx { + uint32_t func_id; + uint32_t qid; /* if qid = 0xFFFF, config for all queues */ + uint32_t vlan_id; + uint32_t vlan_mode; + uint32_t vlan_sel; +}; + +#endif /* _HINIC3_STN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/stn_adapt/meson.build b/drivers/net/hinic3/stn_adapt/meson.build new file mode 100644 index 0000000000..99f7f66ab4 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') 
+sources += files( + 'hinic3_stn_cmdq.c', +) -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
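The patch above is essentially an ops-table dispatch: struct hinic3_nic_cmdq_ops collects per-generation callbacks that fill a command buffer and return the command id, the probe path selects the stn or htn table once based on the NIC_F_HTN_CMDQ capability, and callers such as hinic3_rss_set_indir_tbl() stay generation-agnostic. Below is a minimal standalone sketch of that selection pattern; the command ids, buffer layout and sizes are made-up placeholders, not the driver's real values.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical command buffer, standing in for hinic3_cmd_buf. */
struct cmd_buf {
	uint16_t size;
	uint8_t data[64];
};

/* Per-generation callback table, mirroring the prepare_*/return-cmd-id shape. */
struct nic_cmdq_ops {
	uint8_t (*prepare_set_rss_indir)(const uint32_t *indir, struct cmd_buf *buf);
};

static uint8_t
stn_prepare_set_rss_indir(const uint32_t *indir, struct cmd_buf *buf)
{
	(void)indir;
	buf->size = 16;	/* legacy 16-bit-entry layout, elided */
	return 0x0A;	/* placeholder for the legacy ucode command id */
}

static uint8_t
htn_prepare_set_rss_indir(const uint32_t *indir, struct cmd_buf *buf)
{
	(void)indir;
	buf->size = 8;	/* new 8-bit-entry layout, elided */
	return 0x24;	/* placeholder for the HTN command id */
}

static const struct nic_cmdq_ops stn_ops = { stn_prepare_set_rss_indir };
static const struct nic_cmdq_ops htn_ops = { htn_prepare_set_rss_indir };

int
main(void)
{
	bool has_htn_cmdq = true; /* would come from the negotiated feature bits */
	const struct nic_cmdq_ops *ops = has_htn_cmdq ? &htn_ops : &stn_ops;
	struct cmd_buf buf;
	uint32_t indir[4] = { 0 };
	uint8_t cmd = ops->prepare_set_rss_indir(indir, &buf);

	printf("cmd id = 0x%x, buf size = %u\n", cmd, buf.size);
	return 0;
}

Returning the command id from the prepare callback is what lets the shared hinic3_cmdq_direct_resp()/hinic3_cmdq_detail_resp() call sites in hinic3_nic_cfg.c remain identical for both generations.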
* [V2 4/7] net/hinic3: add fun init ops to support Compact CQE 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (2 preceding siblings ...) 2026-03-16 13:43 ` [V2 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang @ 2026-03-16 13:43 ` Feifei Wang 2026-03-16 13:43 ` [V2 5/7] net/hinic3: add rx " Feifei Wang ` (6 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-16 13:43 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For the new SPx NIC, use compact CQE to achieve better performance. In this mode, the CQE is uploaded together with the packet. During function init, the CQE's DMA memory mapping is replaced with a CI index, and the hinic3 driver polls the CI to check whether a packet has arrived. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.c | 213 +++++-- drivers/net/hinic3/hinic3_ethdev.h | 117 ++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++++---------- drivers/net/hinic3/hinic3_nic_io.h | 61 +- drivers/net/hinic3/hinic3_rx.h | 18 + drivers/net/hinic3/hinic3_tx.h | 8 + .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 24 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 12 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 24 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 12 +- 10 files changed, 578 insertions(+), 436 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 780b17414a..ed2edb51c1 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -32,7 +32,7 @@ #define HINIC3_DEFAULT_RX_FREE_THRESH 32u #define HINIC3_DEFAULT_TX_FREE_THRESH 32u -#define HINIC3_RX_WAIT_CYCLE_THRESH 500 +#define HINIC3_RX_WAIT_CYCLE_THRESH 150 /** * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID. */ @@ -431,8 +431,10 @@ hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) { + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint8_t default_cos = 0; uint8_t valid_cos_bitmap; + uint8_t cos_num_max; uint8_t i; valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap; @@ -441,7 +443,10 @@ hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EFAULT; } - for (i = 0; i < HINIC3_COS_NUM_MAX; i++) { + cos_num_max = nic_dev->feature_cap & NIC_F_HTN_CMDQ ? + HINIC3_COS_NUM_MAX : HINIC3_COS_NUM_MAX_HTN; + + for (i = 0; i < cos_num_max; i++) { if (valid_cos_bitmap & RTE_BIT32(i)) /* Find max cos id as default cos. */ default_cos = i; @@ -644,6 +649,15 @@ hinic3_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +hinic3_dev_tnl_tso_support(struct rte_eth_dev_info *info, struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + if (HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO; +} + /** * Get information about the device.
* @@ -684,6 +698,8 @@ hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) + hinic3_dev_tnl_tso_support(info, nic_dev); info->hash_key_size = HINIC3_RSS_KEY_SIZE; info->reta_size = HINIC3_RSS_INDIR_SIZE; @@ -926,16 +942,25 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, struct hinic3_rxq *rxq = NULL; const struct rte_memzone *rq_mz = NULL; const struct rte_memzone *cqe_mz = NULL; + const struct rte_memzone *ci_mz = NULL; const struct rte_memzone *pi_mz = NULL; uint16_t rq_depth, rx_free_thresh; uint32_t queue_buf_size; void *db_addr = NULL; int wqe_count; uint32_t buf_size; + uint32_t rx_buf_size; int err; nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + /* Queue depth must be equal to queue 0 */ + if (qid != 0 && nb_desc != nic_dev->rxqs[0]->q_depth) { + PMD_DRV_LOG(WARNING, "rxq%u depth:%u is not equal to queue0 depth:%u.", + qid, nb_desc, nic_dev->rxqs[0]->q_depth); + nb_desc = nic_dev->rxqs[0]->q_depth; + } + /* Queue depth must be power of 2, otherwise will be aligned up. */ rq_depth = (nb_desc & (nb_desc - 1)) ? ((uint16_t)(1U << (rte_log2_u32(nb_desc) + 1))) : nb_desc; @@ -997,8 +1022,7 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH; rxq->rx_deferred_start = rx_conf->rx_deferred_start; /* If buf_len used for function table, need to translated. */ - uint16_t rx_buf_size = - rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rx_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size); if (err) { PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s", @@ -1006,11 +1030,16 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto adjust_bufsize_fail; } - if (buf_size >= HINIC3_RX_BUF_SIZE_4K && - buf_size < HINIC3_RX_BUF_SIZE_16K) - rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; - else - rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + /* If NIC support compact CQE, use compact wqe as default. 
*/ + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + rxq->wqe_type = HINIC3_COMPACT_RQ_WQE; + } else { + if (buf_size >= HINIC3_RX_BUF_SIZE_4K && + buf_size < HINIC3_RX_BUF_SIZE_16K) + rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; + else + rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + } rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type; rxq->wqebb_size = (uint16_t)RTE_BIT32(rxq->wqebb_shift); @@ -1062,36 +1091,53 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_rx_info_fail; } - cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, - rq_depth * sizeof(*rxq->rx_cqe), - RTE_CACHE_LINE_SIZE, socket_id); - if (!cqe_mz) { - PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", - dev->data->name); - err = -ENOMEM; - goto alloc_cqe_mz_fail; - } - memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); - rxq->cqe_mz = cqe_mz; - rxq->cqe_start_paddr = cqe_mz->iova; - rxq->cqe_start_vaddr = cqe_mz->addr; - rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; - - wqe_count = hinic3_rx_fill_wqe(rxq); - if (wqe_count != rq_depth) { - PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", - wqe_count, dev->data->name); - err = -ENOMEM; - goto fill_rx_wqe_fail; + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_ci_mz", qid, + sizeof(*rxq->rq_ci), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (!ci_mz) { + PMD_DRV_LOG(ERR, "Allocate ci mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(ci_mz); + goto alloc_cqe_ci_mz_fail; + } + + memset(ci_mz->addr, 0, sizeof(*rxq->rq_ci)); + rxq->ci_mz = ci_mz; + rxq->rq_ci = (struct hinic3_rq_ci_wb *)ci_mz->addr; + rxq->rq_ci_paddr = ci_mz->iova; + } else { + cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, + rq_depth * sizeof(*rxq->rx_cqe), + RTE_CACHE_LINE_SIZE, socket_id); + if (!cqe_mz) { + PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); + rxq->cqe_mz = cqe_mz; + rxq->cqe_start_paddr = cqe_mz->iova; + rxq->cqe_start_vaddr = cqe_mz->addr; + rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; + + wqe_count = hinic3_rx_fill_wqe(rxq); + if (wqe_count != rq_depth) { + PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", + wqe_count, dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(cqe_mz); + goto alloc_cqe_ci_mz_fail; + } } - /* Record rxq pointer in rte_eth rx_queues. */ dev->data->rx_queues[qid] = rxq; return 0; -fill_rx_wqe_fail: - hinic3_memzone_free(rxq->cqe_mz); -alloc_cqe_mz_fail: +alloc_cqe_ci_mz_fail: rte_free(rxq->rx_info); alloc_rx_info_fail: @@ -1199,6 +1245,7 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, txq->owner = 1; txq->cos = nic_dev->default_cos; txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->tx_wqe_compact_task = HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev); ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid, HINIC3_CI_Q_ADDR_SIZE, @@ -1246,7 +1293,6 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_tx_info_fail; } - /* Record txq pointer in rte_eth tx_queues. 
*/ dev->data->tx_queues[qid] = txq; return 0; @@ -1274,7 +1320,10 @@ hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) hinic3_free_rxq_mbufs(rxq); - hinic3_memzone_free(rxq->cqe_mz); + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) + hinic3_memzone_free(rxq->ci_mz); + else + hinic3_memzone_free(rxq->cqe_mz); rte_free(rxq->rx_info); rxq->rx_info = NULL; @@ -1323,9 +1372,12 @@ hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) static int hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; + rxq = dev->data->rx_queues[rq_id]; + rc = hinic3_start_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1333,14 +1385,17 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } - dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", - rq_id); - return rc; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", rq_id); + return rc; + } } + + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1358,14 +1413,10 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) static int hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); - return rc; - } rc = hinic3_stop_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1373,6 +1424,15 @@ hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); + return rc; + } + } + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; @@ -1388,6 +1448,7 @@ hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id) HINIC3_SET_TXQ_STARTED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1404,6 +1465,7 @@ hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id) dev->data->name, sq_id); return rc; } + HINIC3_SET_TXQ_STOPPED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -3286,6 +3348,24 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { .flow_ops_get = hinic3_dev_filter_ctrl, }; +static void hinic3_nic_tx_rx_ops_init(struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev)) + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_compact_task_offload; + else + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_normal_task_offload; + + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + nic_dev->rx_ops->nic_rx_get_cqe_info = hinic3_rx_get_compact_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_integrated_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty; + } else { + nic_dev->rx_ops->nic_rx_get_cqe_info = 
hinic3_rx_get_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_separate_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_rq_empty; + } +} + /** * Initialize the network function, including hardware configuration, memory * allocation for data structures, MAC address setup, and interrupt enabling. @@ -3333,6 +3413,27 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto alloc_eth_addr_fail; } + nic_dev->cmdq_ops = rte_zmalloc("cmdq_ops", sizeof(struct hinic3_nic_cmdq_ops), 0); + if (!nic_dev->cmdq_ops) { + PMD_DRV_LOG(ERR, "Allocate cmdq_ops memory failed"); + err = -ENOMEM; + goto alloc_cmdq_ops_fail; + } + + nic_dev->rx_ops = rte_zmalloc("rx_ops", sizeof(struct hinic3_nic_rx_ops), 0); + if (!nic_dev->rx_ops) { + PMD_DRV_LOG(ERR, "Allocate rx_ops memory failed"); + err = -ENOMEM; + goto alloc_rx_ops_fail; + } + + nic_dev->tx_ops = rte_zmalloc("tx_ops", sizeof(struct hinic3_nic_tx_ops), 0); + if (!nic_dev->tx_ops) { + PMD_DRV_LOG(ERR, "Allocate tx_ops memory failed"); + err = -ENOMEM; + goto alloc_tx_ops_fail; + } + nic_dev->mc_list = rte_zmalloc("hinic3_mc", HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0); if (!nic_dev->mc_list) { @@ -3387,9 +3488,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) } if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) - nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); else - nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + + hinic3_nic_tx_rx_ops_init(nic_dev); err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { @@ -3481,10 +3584,22 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) alloc_mc_list_fail: rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; + rte_free(nic_dev->tx_ops); + nic_dev->tx_ops = NULL; +alloc_cmdq_ops_fail: alloc_eth_addr_fail: PMD_DRV_LOG(ERR, "Initialize %s in primary failed", eth_dev->data->name); + +alloc_rx_ops_fail: + rte_free(nic_dev->cmdq_ops); + nic_dev->cmdq_ops = NULL; + +alloc_tx_ops_fail: + rte_free(nic_dev->rx_ops); + nic_dev->rx_ops = NULL; + return err; } diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 4a5dbb0844..3898edd076 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -14,44 +14,50 @@ #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle) -#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN -#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD -#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD -#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG -#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM -#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM -#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM -#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN -#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK -#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM -#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 -#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 -#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN -#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED -#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH -#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK -#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN -#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM 
-#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 -#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO -#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM +#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN +#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD +#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD +#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG +#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM +#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM +#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM +#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN +#define HINIC3_PKT_TX_QINQ_PKT RTE_MBUF_F_TX_QINQ +#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK +#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM +#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 +#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 +#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN +#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED +#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH +#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK +#define HINIC3_PKT_TX_TUNNEL_GRE RTE_MBUF_F_TX_TUNNEL_GRE +#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN +#define HINIC3_PKT_TX_TUNNEL_VXLAN_GPE RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE +#define HINIC3_PKT_TX_TUNNEL_GENEVE RTE_MBUF_F_TX_TUNNEL_GENEVE +#define HINIC3_PKT_TX_TUNNEL_IPIP RTE_MBUF_F_TX_TUNNEL_IPIP +#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM +#define HINIC3_PKT_TX_OUTER_UDP_CKSUM RTE_MBUF_F_TX_OUTER_UDP_CKSUM +#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 +#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO +#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM #define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool" /* Mbuf pool for copy invalid mbuf segs. 
*/ -#define HINIC3_COPY_MEMPOOL_DEPTH 1024 -#define HINIC3_COPY_MEMPOOL_CACHE 128 -#define HINIC3_COPY_MBUF_SIZE 4096 +#define HINIC3_COPY_MEMPOOL_DEPTH 1024 +#define HINIC3_COPY_MEMPOOL_CACHE 128 +#define HINIC3_COPY_MBUF_SIZE 4096 -#define HINIC3_DEV_NAME_LEN 32 -#define DEV_STOP_DELAY_MS 100 -#define DEV_START_DELAY_MS 100 -#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 +#define HINIC3_DEV_NAME_LEN 32 +#define DEV_STOP_DELAY_MS 100 +#define DEV_START_DELAY_MS 100 +#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 -#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) -#define HINIC3_MAX_QUEUE_NUM 64 +#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) +#define HINIC3_MAX_QUEUE_NUM 256 #define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \ ((struct hinic3_nic_dev *)(dev)->data->dev_private) @@ -68,23 +74,34 @@ enum hinic3_tx_cvlan_type { }; enum nic_feature_cap { - NIC_F_CSUM = RTE_BIT32(0), - NIC_F_SCTP_CRC = RTE_BIT32(1), - NIC_F_TSO = RTE_BIT32(2), - NIC_F_LRO = RTE_BIT32(3), - NIC_F_UFO = RTE_BIT32(4), - NIC_F_RSS = RTE_BIT32(5), - NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), - NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), - NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), - NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), - NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), - NIC_F_FDIR = RTE_BIT32(11), - NIC_F_PROMISC = RTE_BIT32(12), - NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_CSUM = RTE_BIT32(0), + NIC_F_SCTP_CRC = RTE_BIT32(1), + NIC_F_TSO = RTE_BIT32(2), + NIC_F_LRO = RTE_BIT32(3), + NIC_F_UFO = RTE_BIT32(4), + NIC_F_RSS = RTE_BIT32(5), + NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), + NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), + NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), + NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), + NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), + NIC_F_FDIR = RTE_BIT32(11), + NIC_F_PROMISC = RTE_BIT32(12), + NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_PTP_1588_V2 = RTE_BIT32(18), + NIC_F_TX_WQE_COMPACT_TASK = RTE_BIT32(19), + NIC_F_RX_HW_COMPACT_CQE = RTE_BIT32(20), + NIC_F_HTN_CMDQ = RTE_BIT32(21), + NIC_F_GENEVE_OFFLOAD = RTE_BIT32(22), + NIC_F_IPXIP_OFFLOAD = RTE_BIT32(23), + NIC_F_TC_FLOWER_OFFLOAD = RTE_BIT32(24), + NIC_F_HTN_FDIR = RTE_BIT32(25), + NIC_F_SQ_RQ_CI_COALESCE = RTE_BIT32(26), + NIC_F_RX_SW_COMPACT_CQE = RTE_BIT32(27), + }; -#define DEFAULT_DRV_FEATURE 0x3FFF +#define DEFAULT_DRV_FEATURE 0x3FC3FFF TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow); TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow); @@ -133,6 +150,10 @@ struct hinic3_nic_dev { struct hinic3_tcam_info tcam; struct hinic3_ethertype_filter_list filter_ethertype_list; struct hinic3_fdir_rule_filter_list filter_fdir_rule_list; + struct hinic3_nic_cmdq_ops *cmdq_ops; + struct hinic3_nic_rx_ops *rx_ops; + struct hinic3_nic_tx_ops *tx_ops; + }; extern const struct rte_flow_ops hinic3_flow_ops; diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c index 7f2972f1d1..98e0bbecf1 100644 --- a/drivers/net/hinic3/hinic3_nic_io.c +++ b/drivers/net/hinic3/hinic3_nic_io.c @@ -11,310 +11,194 @@ #include "hinic3_rx.h" #include "hinic3_tx.h" -#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 -#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 -#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF -#define HINIC3_DEAULT_DROP_THD_OFF 0 - -#define WQ_PREFETCH_MAX 6 -#define WQ_PREFETCH_MIN 1 -#define WQ_PREFETCH_THRESHOLD 256 - -#define HINIC3_Q_CTXT_MAX \ - ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) - -enum hinic3_qp_ctxt_type { - HINIC3_QP_CTXT_TYPE_SQ, 
- HINIC3_QP_CTXT_TYPE_RQ, -}; - -struct hinic3_qp_ctxt_header { - uint16_t num_queues; - uint16_t queue_type; - uint16_t start_qid; - uint16_t rsvd; -}; - -struct hinic3_sq_ctxt { - uint32_t ci_pi; - uint32_t drop_mode_sp; /**< Packet drop mode and special flags. */ - uint32_t wq_pfn_hi_owner; /**< High PFN and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd0; /**< Reserved field 0. */ - uint32_t pkt_drop_thd; /**< Packet drop threshold. */ - uint32_t global_sq_id; - uint32_t vlan_ceq_attr; /**< VLAN and CEQ attributes. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t rsvd8; /**< Reserved field 8. */ - uint32_t rsvd9; /**< Reserved field 9. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_rq_ctxt { - uint32_t ci_pi; - uint32_t ceq_attr; /**< Completion event queue attributes. */ - uint32_t wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd[3]; /**< Reserved field. */ - uint32_t cqe_sge_len; /**< CQE scatter/gather element length. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t pi_paddr_hi; /**< High 32-bits of PI DMA address. */ - uint32_t pi_paddr_lo; /**< Low 32-bits of PI DMA address. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. 
*/ -}; - -struct hinic3_sq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_rq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; - uint32_t rsvd; -}; - -#define SQ_CTXT_SIZE(num_sqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_sqs) * sizeof(struct hinic3_sq_ctxt))) - -#define RQ_CTXT_SIZE(num_rqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - -#define CI_IDX_HIGH_SHIFH 12 +#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 +#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 +#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF +#define HINIC3_DEAULT_DROP_THD_OFF 0 + +#define WQ_PREFETCH_MAX 6 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +#define CI_IDX_HIGH_SHIFH 12 #define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) -#define SQ_CTXT_PI_IDX_SHIFT 0 -#define SQ_CTXT_CI_IDX_SHIFT 16 +#define SQ_CTXT_PI_IDX_SHIFT 0 +#define SQ_CTXT_CI_IDX_SHIFT 16 -#define SQ_CTXT_PI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_IDX_MASK 0xFFFFU +#define SQ_CTXT_PI_IDX_MASK 0xFFFFU +#define SQ_CTXT_CI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_PI_SET(val, member) \ +#define SQ_CTXT_CI_PI_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 -#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 +#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 +#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 -#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U -#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U +#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U +#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U -#define SQ_CTXT_MODE_SET(val, member) \ - (((val) & SQ_CTXT_MODE_##member##_MASK) \ +#define SQ_CTXT_MODE_SET(val, member) \ + (((val) & SQ_CTXT_MODE_##member##_MASK) \ << SQ_CTXT_MODE_##member##_SHIFT) -#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 +#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 -#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define SQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define SQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ << SQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 -#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 +#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 +#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 -#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ - (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ +#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ + (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ << SQ_CTXT_PKT_DROP_##member##_SHIFT) -#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 +#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 -#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU +#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU #define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_VLAN_TAG_SHIFT 0 -#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 -#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 -#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 +#define 
SQ_CTXT_VLAN_TAG_SHIFT 0 +#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 +#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 +#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 -#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU -#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U -#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U -#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U +#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU +#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U +#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U +#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U -#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ - (((val) & SQ_CTXT_VLAN_##member##_MASK) \ +#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ + (((val) & SQ_CTXT_VLAN_##member##_MASK) \ << SQ_CTXT_VLAN_##member##_SHIFT) -#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define SQ_CTXT_PREF_CI_HI_SHIFT 0 -#define SQ_CTXT_PREF_OWNER_SHIFT 4 +#define SQ_CTXT_PREF_CI_HI_SHIFT 0 +#define SQ_CTXT_PREF_OWNER_SHIFT 4 -#define SQ_CTXT_PREF_CI_HI_MASK 0xFU -#define SQ_CTXT_PREF_OWNER_MASK 0x1U +#define SQ_CTXT_PREF_CI_HI_MASK 0xFU +#define SQ_CTXT_PREF_OWNER_MASK 0x1U -#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define SQ_CTXT_PREF_SET(val, member) \ - (((val) & SQ_CTXT_PREF_##member##_MASK) \ +#define SQ_CTXT_PREF_SET(val, member) \ + (((val) & SQ_CTXT_PREF_##member##_MASK) \ << SQ_CTXT_PREF_##member##_SHIFT) -#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT) -#define RQ_CTXT_PI_IDX_SHIFT 0 -#define RQ_CTXT_CI_IDX_SHIFT 16 +#define RQ_CTXT_PI_IDX_SHIFT 0 +#define RQ_CTXT_CI_IDX_SHIFT 16 -#define RQ_CTXT_PI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_IDX_MASK 0xFFFFU +#define RQ_CTXT_PI_IDX_MASK 0xFFFFU +#define RQ_CTXT_CI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_PI_SET(val, member) \ +#define RQ_CTXT_CI_PI_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 -#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 +#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 +#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 -#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ - (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ +#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ + (((val) & 
RQ_CTXT_CEQ_ATTR_##member##_MASK) \ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT) -#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 -#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 +#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 +#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 -#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U -#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U +#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define RQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define RQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ << RQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define RQ_CTXT_CQE_LEN_SHIFT 28 +#define RQ_CTXT_CQE_LEN_SHIFT 28 -#define RQ_CTXT_CQE_LEN_MASK 0x3U +#define RQ_CTXT_CQE_LEN_MASK 0x3U -#define RQ_CTXT_CQE_LEN_SET(val, member) \ +#define RQ_CTXT_CQE_LEN_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define RQ_CTXT_PREF_CI_HI_SHIFT 0 -#define RQ_CTXT_PREF_OWNER_SHIFT 4 +#define RQ_CTXT_PREF_CI_HI_SHIFT 0 +#define RQ_CTXT_PREF_OWNER_SHIFT 4 -#define RQ_CTXT_PREF_CI_HI_MASK 0xFU -#define RQ_CTXT_PREF_OWNER_MASK 0x1U +#define RQ_CTXT_PREF_CI_HI_MASK 0xFU +#define RQ_CTXT_PREF_OWNER_MASK 0x1U -#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define RQ_CTXT_PREF_SET(val, member) \ - (((val) & RQ_CTXT_PREF_##member##_MASK) \ +#define RQ_CTXT_PREF_SET(val, member) \ + (((val) & RQ_CTXT_PREF_##member##_MASK) \ << RQ_CTXT_PREF_##member##_SHIFT) -#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ << RQ_CTXT_WQ_BLOCK_##member##_SHIFT) #define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4) -#define WQ_PAGE_PFN_SHIFT 12 -#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_PAGE_PFN_SHIFT 12 +#define WQ_BLOCK_PFN_SHIFT 9 #define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT) #define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -/** - * Prepare the command queue header and converted it to big-endian format. - * - * @param[out] qp_ctxt_hdr - * Pointer to command queue context header structure to be initialized. - * @param[in] ctxt_type - * Type of context (SQ/RQ) to be set in header. - * @param[in] num_queues - * Number of queues. - * @param[in] q_id - * Starting queue ID for this context. 
- */ -static void -hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, - enum hinic3_qp_ctxt_type ctxt_type, - uint16_t num_queues, uint16_t q_id) -{ - qp_ctxt_hdr->queue_type = ctxt_type; - qp_ctxt_hdr->num_queues = num_queues; - qp_ctxt_hdr->start_qid = q_id; - qp_ctxt_hdr->rsvd = 0; - - rte_atomic_thread_fence(rte_memory_order_seq_cst); +#define CQE_CTX_CI_ADDR_SHIFT 4 - hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); -} - -/** - * Initialize context structure for specified TXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] sq - * Pointer to TXQ structure. - * @param[in] sq_id - * ID of TXQ being configured. - * @param[out] sq_ctxt - * Pointer to structure that will hold TXQ context. - */ -static void +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, struct hinic3_sq_ctxt *sq_ctxt) { @@ -386,22 +270,13 @@ hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt)); } -/** - * Initialize context structure for specified RXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] rq - * Pointer to RXQ structure. - * @param[out] rq_ctxt - * Pointer to structure that will hold RXQ context. - */ -static void +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) { uint64_t wq_page_addr, wq_page_pfn, wq_block_pfn; uint32_t wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo; uint16_t pi_start, ci_start; - uint16_t wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT; + uint16_t wqe_type = rq->wqe_type; uint8_t intr_disable; /* RQ depth is in unit of 8 Bytes. */ @@ -446,6 +321,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE); rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN); break; + case HINIC3_COMPACT_RQ_WQE: + /* Use 8Byte WQE without SGE for CQE. */ + rq_ctxt->wq_pfn_hi_type_owner |= RQ_CTXT_WQ_PAGE_SET(3, WQE_TYPE); + break; default: PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type); } @@ -495,12 +374,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) static int init_sq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL; - struct hinic3_sq_ctxt *sq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_txq *sq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -511,28 +388,14 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_sqs) { - sq_ctxt_block = cmd_buf->buf; - sq_ctxt = sq_ctxt_block->sq_ctxt; - max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX ? 
HINIC3_Q_CTXT_MAX : (nic_dev->num_sqs - q_id); - - hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_SQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - sq = nic_dev->txqs[curr_id]; - hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]); - } - - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set SQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -563,12 +426,10 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) static int init_rq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL; - struct hinic3_rq_ctxt *rq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_rxq *rq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -579,28 +440,14 @@ init_rq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_rqs) { - rq_ctxt_block = cmd_buf->buf; - rq_ctxt = rq_ctxt_block->rq_ctxt; - max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX ? HINIC3_Q_CTXT_MAX : (nic_dev->num_rqs - q_id); - - hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_RQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - rq = nic_dev->rxqs[curr_id]; - hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]); - } - - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_RQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set RQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -633,9 +480,9 @@ static int clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; struct hinic3_cmd_buf *cmd_buf; uint64_t out_param = 0; + uint8_t cmd; int err; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -644,26 +491,11 @@ clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, return -ENOMEM; } - /* Construct related command request. */ - ctxt_block = cmd_buf->buf; - /* Assumed max_rqs must be equal to max_sqs. */ - ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; - ctxt_block->cmdq_hdr.queue_type = ctxt_type; - ctxt_block->cmdq_hdr.start_qid = 0; - /* - * Add a memory barrier to ensure that instructions are not out of order - * due to compilation optimization. - */ - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); - - cmd_buf->size = sizeof(*ctxt_block); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_clean_tso_lro_space(nic_dev, cmd_buf, ctxt_type); /* Send a command to hardware to clean up queue offload context. 
*/ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if ((err) || (out_param)) { PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, err: %d, out_param: %" PRIu64, @@ -705,6 +537,62 @@ hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev) nic_dev->rx_buff_len = buf_size; } +#define HINIC3_RX_CQE_TIMER_LOOP 15 +#define HINIC3_RX_CQE_COALESCE_NUM 63 + +int +hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rxq *rxq = NULL; + struct hinic3_rq_cqe_ctx cqe_ctx = { 0 }; + rte_iova_t rq_ci_paddr; + uint16_t out_size = sizeof(cqe_ctx); + uint16_t q_id = 0; + uint16_t cmd; + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ) + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN; + else + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX; + + while (q_id < nic_dev->num_rqs) { + rxq = nic_dev->rxqs[q_id]; + if (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE) { + rq_ci_paddr = rxq->rq_ci_paddr >> CQE_CTX_CI_ADDR_SHIFT; + cqe_ctx.ci_addr_hi = upper_32_bits(rq_ci_paddr); + cqe_ctx.ci_addr_lo = lower_32_bits(rq_ci_paddr); + cqe_ctx.threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM; + cqe_ctx.timer_loop = HINIC3_RX_CQE_TIMER_LOOP; + } else { + cqe_ctx.threshold_cqe_num = 0; + cqe_ctx.timer_loop = 0; + } + + cqe_ctx.cqe_type = (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE); + cqe_ctx.msix_entry_idx = rxq->msix_entry_idx; + cqe_ctx.rq_id = q_id; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, + &cqe_ctx, sizeof(cqe_ctx), + &cqe_ctx, &out_size); + if (err || !out_size || cqe_ctx.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed, qid: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, err, cqe_ctx.msg_head.status, out_size); + return -EFAULT; + } + q_id++; + } + + return 0; +} + int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) { @@ -768,13 +656,50 @@ hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) } } + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + /* Init Rxq CQE context. 
*/ + err = hinic3_init_rq_cqe_ctxts(nic_dev); + if (err) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed"); + goto set_cqe_ctx_fail; + } + } + return 0; +set_cqe_ctx_fail: set_cons_idx_table_err: hinic3_clean_root_ctxt(hwdev); return err; } +int +hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rq_enable msg; + uint16_t out_size = sizeof(msg); + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + memset(&msg, 0, sizeof(msg)); + msg.rq_enable = enable; + msg.rq_id = q_id; + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_RQ_ENABLE, + &msg, sizeof(msg), &msg, &out_size); + if (err || !out_size || msg.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq enable failed, qid: %u, enable: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, enable, err, msg.msg_head.status, out_size); + return -EFAULT; + } + + return 0; +} + void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev) { diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index 5176f17f09..a803861199 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -28,11 +28,6 @@ #define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) -#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) -#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -231,6 +226,31 @@ hinic3_write_db(void *db_addr, uint16_t q_id, int cos, uint8_t cflag, uint16_t p */ void hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev); +/** + * Initialize RQ integrated CQE context + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev); + +/** + * Set RQ disable or enable + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * @param[in] q_id + * Receive queue id. + * @param[in] enable + * 1: enable 0: disable + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable); + /** * Initialize qps contexts, set SQ ci attributes, arm all SQ. * @@ -268,7 +288,8 @@ struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); /** * Get cmdq ops hardware tile NIC(htn) supported. * - * @retval Pointer to ops. + * @return + * Pointer to ops. */ struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); @@ -279,9 +300,6 @@ struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); * Pointer to ethernet device structure. * @param[out] s_feature * s_feature driver supported. - * - * @return - * 0 on success, non-zero on failure. */ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_feature); @@ -296,4 +314,29 @@ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_fea */ uint64_t hinic3_get_driver_feature(struct hinic3_nic_dev *nic_dev); +/** + * Initialize context structure for specified TXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] sq + * Pointer to TXQ structure. + * @param[in] sq_id + * ID of TXQ being configured. 
+ * @param[out] sq_ctxt + * Pointer to structure that will hold TXQ context. + */ +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, + struct hinic3_sq_ctxt *sq_ctxt); + +/** + * Initialize context structure for specified RXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] rq + * Pointer to RXQ structure. + * @param[out] rq_ctxt + * Pointer to structure that will hold RXQ context. + */ +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt); + #endif /* _HINIC3_NIC_IO_H_ */ diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 1a92df59b7..7ae39e3e91 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -279,6 +279,24 @@ struct __rte_cache_aligned hinic3_rxq { #endif }; +/* Rx CQE info get callback function */ +typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/* Rx CQE check status callback function */ +typedef bool (*nic_rx_cqe_done_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe **rx_cqe); + +/* Rx CQE empty poll callback function */ +typedef int (*nic_rx_poll_rq_empty_t)(struct hinic3_rxq *rxq); + +struct hinic3_nic_rx_ops { + nic_rx_get_cqe_info_t nic_rx_get_cqe_info; + nic_rx_cqe_done_t nic_rx_cqe_done; + nic_rx_poll_rq_empty_t nic_rx_poll_rq_empty; +}; + uint16_t hinic3_rx_fill_wqe(struct hinic3_rxq *rxq); uint16_t hinic3_rx_fill_buffers(struct hinic3_rxq *rxq); diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index d150f7c6a4..21958a00cc 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -304,6 +304,14 @@ struct __rte_cache_aligned hinic3_txq { #endif }; +/* Tx WQE offload set callback function */ +typedef void (*nic_tx_set_wqe_offload_t)(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +struct hinic3_nic_tx_ops { + nic_tx_set_wqe_offload_t nic_tx_set_wqe_offload; +}; + void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev); void hinic3_free_txq_mbufs(struct hinic3_txq *txq); void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c index c5d32a33bb..dd944c0cf4 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_htn_cmdq.h" +#define HTN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define HTN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_htn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -27,7 +32,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_htn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id, 
uint16_t func_id) { @@ -45,7 +50,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_htn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t func_id; uint16_t i; @@ -65,9 +70,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; } @@ -75,10 +80,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_htn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_htn_vlan_ctx); + vlan_ctx = (struct hinic3_htn_vlan_ctx *)cmd_buf->buf; vlan_ctx->dest_func_id = func_id; vlan_ctx->start_qid = q_id; @@ -87,7 +92,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_htn_vlan_ctx)); return HINIC3_HTN_CMD_SVLAN_MODIFY; } diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 1245b9c8d8..ffafe39fb5 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -7,7 +7,7 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_htn_qp_ctxt_header { uint32_t rsvd[2]; uint16_t num_queues; uint16_t queue_type; @@ -15,12 +15,12 @@ struct hinic3_qp_ctxt_header { uint16_t dest_func_id; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_clean_queue_ctxt { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_qp_ctxt_block { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; @@ -43,7 +43,7 @@ enum hinic3_htn_cmd { HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE }; -struct hinic3_vlan_ctx { +struct hinic3_htn_vlan_ctx { uint32_t rsv[2]; uint16_t vlan_tag; uint8_t vlan_sel; diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index fa16508d32..5e6594f518 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_stn_cmdq.h" +#define STN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define STN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct 
hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_stn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -26,7 +31,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_stn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id) { @@ -44,7 +49,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_stn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t i; qp_ctxt_block = cmd_buf->buf; @@ -62,9 +67,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; } @@ -72,10 +77,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_stn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_stn_vlan_ctx); + vlan_ctx = (struct hinic3_stn_vlan_ctx *)cmd_buf->buf; vlan_ctx->func_id = func_id; vlan_ctx->qid = q_id; @@ -84,7 +89,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_stn_vlan_ctx)); return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index f8d26e9397..a40c4faa89 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -7,27 +7,27 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_stn_qp_ctxt_header { uint16_t num_queues; uint16_t queue_type; uint16_t start_qid; uint16_t rsvd; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_clean_queue_ctxt { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; uint32_t rsvd; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_qp_ctxt_block { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; }; }; -struct hinic3_vlan_ctx { +struct hinic3_stn_vlan_ctx { uint32_t func_id; uint32_t qid; /* if qid = 0xFFFF, config for all queues */ uint32_t vlan_id; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
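[Editor's note to readers of the series: the stn_adapt and htn_adapt sources above implement the same hinic3_nic_cmdq_ops contract with family-specific context layouts, and hinic3_nic_io.c now reaches them only through nic_dev->cmdq_ops. Below is a minimal sketch of how that binding could be made at device init, assuming selection on the NIC_F_HTN_CMDQ feature bit; the wrapper name hinic3_init_nic_cmdq_ops and its placement are illustrative and not taken from the series.]

/*
 * Sketch only: bind the per-family cmdq ops table once, so callers such as
 * init_sq_ctxts()/init_rq_ctxts() stay family-agnostic. The two getter
 * functions and the NIC_F_HTN_CMDQ feature bit come from the series; this
 * wrapper itself is hypothetical.
 */
static void
hinic3_init_nic_cmdq_ops(struct hinic3_nic_dev *nic_dev)
{
	if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ)
		nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops();
	else
		nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops();
}

With the ops table bound, a caller only consumes the opcode returned by the callback, e.g. cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts), and never touches the stn/htn context structures directly.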
* [V2 5/7] net/hinic3: add rx ops to support Compact CQE 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (3 preceding siblings ...) 2026-03-16 13:43 ` [V2 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang @ 2026-03-16 13:43 ` Feifei Wang 2026-03-16 13:43 ` [V2 6/7] net/hinic3: add tx " Feifei Wang ` (5 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-16 13:43 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt receive path, use different func callback to separate normal CQE process and Compact CQE process. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.h | 3 +- drivers/net/hinic3/hinic3_rx.c | 240 ++++++++++++++++++++++------- drivers/net/hinic3/hinic3_rx.h | 164 +++++++++++++++++++- 3 files changed, 341 insertions(+), 66 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 3898edd076..9061e2b217 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -121,8 +121,7 @@ struct hinic3_nic_dev { uint16_t mtu_size; uint16_t rss_state; - uint8_t num_rss; /**< Number of RSS queues. */ - uint8_t rsvd0; /**< Reserved field 0. */ + uint16_t num_rss; /**< Number of RSS queues. */ uint32_t rx_mode; uint8_t rx_queue_list[HINIC3_MAX_QUEUE_NUM]; diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 3d5f4e4524..e5a5f21df3 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -219,11 +219,11 @@ hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq) while (free_wqebbs++ < rxq->q_depth) { ci = hinic3_get_rq_local_ci(rxq); - - rx_cqe = &rxq->rx_cqe[ci]; - - /* Clear done bit. */ - rx_cqe->status = 0; + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + rx_cqe = &rxq->rx_cqe[ci]; + /* Clear done bit. */ + rx_cqe->status = 0; + } rx_info = &rxq->rx_info[ci]; rte_pktmbuf_free(rx_info->mbuf); @@ -299,7 +299,7 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) for (i = 0; i < rearm_wqebbs; i++) { dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]); - /* Fill buffer address only. */ + /* Fill packet dma address into wqe. 
*/ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { rq_wqe->extend_wqe.buf_desc.sge.hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); @@ -307,11 +307,16 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) hinic3_hw_be32(lower_32_bits(dma_addr)); rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; - } else { + } else if (rxq->wqe_type == HINIC3_NORMAL_RQ_WQE) { rq_wqe->normal_wqe.buf_hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); rq_wqe->normal_wqe.buf_lo_addr = hinic3_hw_be32(lower_32_bits(dma_addr)); + } else { + rq_wqe->compact_wqe.buf_hi_addr = + hinic3_hw_be32(upper_32_bits(dma_addr)); + rq_wqe->compact_wqe.buf_lo_addr = + hinic3_hw_be32(lower_32_bits(dma_addr)); } rq_wqe = @@ -355,7 +360,7 @@ hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev, void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1)); @@ -372,7 +377,7 @@ hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev) static void hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, uint32_t *indir_tbl) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; int i = 0; int j; @@ -522,7 +527,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { uint8_t queue_pos; - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list, rss_queue_count, queue_id); @@ -534,8 +539,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, rss_queue_count--; memmove(nic_dev->rx_queue_list + queue_pos, nic_dev->rx_queue_list + queue_pos + 1, - (rss_queue_count - queue_pos) * - sizeof(nic_dev->rx_queue_list[0])); + (rss_queue_count - queue_pos) * sizeof(nic_dev->rx_queue_list[0])); } RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list)); @@ -618,6 +622,32 @@ hinic3_poll_rq_empty(struct hinic3_rxq *rxq) return err; } +int +hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq) +{ + struct hinic3_rx_info *rx_info; + struct hinic3_rq_ci_wb rq_ci; + uint16_t sw_ci; + uint16_t hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(__atomic_load_n(&rxq->rq_ci->dw1.value, + __ATOMIC_ACQUIRE)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + while (sw_ci != hw_ci) { + rx_info = &rxq->rx_info[sw_ci]; + rte_pktmbuf_free(rx_info->mbuf); + rx_info->mbuf = NULL; + + sw_ci++; + sw_ci &= rxq->q_mask; + hinic3_update_rq_local_ci(rxq, 1); + } + + return 0; +} + void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, uint32_t *cqe_hole_cnt, uint32_t *head_ci, uint32_t *head_done) @@ -701,14 +731,17 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) rte_spinlock_unlock(&nic_dev->queue_list_lock); /* Send flush rxq cmd to device. 
*/ - err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + else + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, false); if (err) { PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto rq_flush_failed; } - err = hinic3_poll_rq_empty(rxq); + err = nic_dev->rx_ops->nic_rx_poll_rq_empty(rxq); if (err) { hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt, &head_ci, &head_done); @@ -724,6 +757,7 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return 0; poll_rq_failed: + hinic3_set_rq_enable(nic_dev, rxq->q_id, true); rq_flush_failed: rte_spinlock_lock(&nic_dev->queue_list_lock); set_indir_failed: @@ -746,14 +780,22 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id); if (nic_dev->rss_state == HINIC3_RSS_ENABLE) { - err = hinic3_refill_indir_rqid(rxq); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_FDIR) != 0) + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, true); if (err) { - PMD_DRV_LOG(ERR, - "Refill rq to indirect table failed, eth_dev:%s, queue_idx:%d err:%d", - nic_dev->dev_name, rxq->q_id, err); - hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", + nic_dev->dev_name, rxq->q_id); + } else { + err = hinic3_refill_indir_rqid(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Refill rq to indirect table failed," + "eth_dev:%s, queue_idx:%d err:%d", + nic_dev->dev_name, rxq->q_id, err); + hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + } } } + hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); @@ -772,12 +814,9 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) static inline uint64_t -hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) +hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { - uint16_t vlan_tag; - - vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len); - if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) { + if (!vlan_offload || vlan_tag == 0) { *vlan_tci = 0; return 0; } @@ -788,16 +827,14 @@ hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) } static inline uint64_t -hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) +hinic3_rx_csum(uint16_t csum_err, struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; - uint32_t csum_err; uint64_t flags; if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD))) return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN; - csum_err = HINIC3_GET_RX_CSUM_ERR(status); if (likely(csum_err == 0)) return (HINIC3_PKT_RX_IP_CKSUM_GOOD | HINIC3_PKT_RX_L4_CKSUM_GOOD); @@ -832,11 +869,9 @@ hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) } static inline uint64_t -hinic3_rx_rss_hash(uint32_t offload_type, uint32_t rss_hash_value, uint32_t *rss_hash) +hinic3_rx_rss_hash(uint32_t rss_type, uint32_t rss_hash_value, uint32_t *rss_hash) { - uint32_t rss_type; - rss_type = HINIC3_GET_RSS_TYPES(offload_type); if (likely(rss_type != 0)) { *rss_hash = rss_hash_value; return HINIC3_PKT_RX_RSS_HASH; @@ -931,18 +966,117 @@ hinic3_start_all_rqs(struct rte_eth_dev *eth_dev) return err; } +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + volatile struct hinic3_rq_cqe *cqe = NULL; + uint16_t 
sw_ci; + uint32_t status; + + sw_ci = hinic3_get_rq_local_ci(rxq); + *rx_cqe = &rxq->rx_cqe[sw_ci]; + cqe = *rx_cqe; + + status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&cqe->status, + rte_memory_order_acquire))); + if (!HINIC3_GET_RX_DONE(status)) + return false; + + return true; +} + +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_ci_wb rq_ci; + struct rte_mbuf *rxm = NULL; + uint16_t sw_ci, hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(rte_atomic_load_explicit(&rxq->rq_ci->dw1.value, + rte_memory_order_acquire)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + if (hw_ci == sw_ci) + return false; + + rxm = rxq->rx_info[sw_ci].mbuf; + + *rx_cqe = (volatile struct hinic3_rq_cqe *)rte_mbuf_data_addr_default(rxm); + + return true; +} + +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0 = hinic3_hw_cpu32(rx_cqe->status); + uint32_t dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); + + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + + cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); + cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); + + cqe_info->ptype = HINIC3_GET_RX_PTYPE_OFFLOAD(dw0); + cqe_info->vlan_offload = RQ_CQE_OFFOLAD_TYPE_GET(dw2, VLAN_EN); + cqe_info->rss_type = RQ_CQE_OFFOLAD_TYPE_GET(dw2, RSS_TYPE); + cqe_info->rss_hash_value = dw3; +} + +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0, dw1, dw2; + + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + dw0 = hinic3_hw_cpu32(rx_cqe->status); + dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + } else { + /* Compact Rx CQE mode integrates cqe with packet in big endian way. */ + dw0 = rte_be_to_cpu_32(rx_cqe->status); + dw1 = rte_be_to_cpu_32(rx_cqe->vlan_len); + dw2 = rte_be_to_cpu_32(rx_cqe->offload_type); + } + + cqe_info->cqe_type = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_TYPE); + cqe_info->csum_err = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->vlan_offload = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, VLAN_EN); + cqe_info->cqe_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_LEN); + cqe_info->pkt_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PKT_LEN); + cqe_info->ts_flag = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, TS_FLAG); + cqe_info->ptype = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PTYPE); + cqe_info->rss_hash_value = dw1; + + if (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) { + cqe_info->lro_num = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, NUM_LRO); + cqe_info->vlan_tag = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, VLAN); + } + + if (cqe_info->cqe_type == HINIC3_RQ_CQE_INTEGRATE) + cqe_info->data_offset = + (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) ? 
16 : 8; +} + #define HINIC3_RX_EMPTY_THRESHOLD 3 uint16_t hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct hinic3_rxq *rxq = rx_queue; + struct hinic3_nic_dev *nic_dev = rxq->nic_dev; struct hinic3_rx_info *rx_info = NULL; volatile struct hinic3_rq_cqe *rx_cqe = NULL; + struct hinic3_cqe_info cqe_info = {0}; struct rte_mbuf *rxm = NULL; - uint16_t sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0; - uint32_t status, pkt_len, vlan_len, offload_type, lro_num; + uint16_t sw_ci, rx_buf_len, pkts = 0; + uint32_t pkt_len; uint64_t rx_bytes = 0; - uint32_t hash_value; #ifdef HINIC3_XSTAT_PROF_RX uint64_t t1 = rte_get_tsc_cycles(); @@ -953,20 +1087,22 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) goto out; sw_ci = hinic3_get_rq_local_ci(rxq); - rx_buf_len = rxq->buf_len; while (pkts < nb_pkts) { rx_cqe = &rxq->rx_cqe[sw_ci]; - status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&rx_cqe->status, - rte_memory_order_acquire))); - if (!HINIC3_GET_RX_DONE(status)) { + if (!nic_dev->rx_ops->nic_rx_cqe_done(rxq, &rx_cqe)) { rxq->rxq_stats.empty++; break; } - vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len); + nic_dev->rx_ops->nic_rx_get_cqe_info(rxq, rx_cqe, &cqe_info); - pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len); + pkt_len = cqe_info.pkt_len; + /* + * Compact Rx CQE mode integrates cqe with packet, + * so mbuf length needs to remove the length of cqe. + */ + rx_buf_len = rxq->buf_len - cqe_info.data_offset; rx_info = &rxq->rx_info[sw_ci]; rxm = rx_info->mbuf; @@ -982,7 +1118,7 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (likely(pkt_len <= rx_buf_len)) { rxm->data_len = (uint16_t)pkt_len; rxm->pkt_len = pkt_len; - wqebb_cnt++; + hinic3_update_rq_local_ci(rxq, 1); } else { rxm->data_len = rx_buf_len; rxm->pkt_len = rx_buf_len; @@ -991,33 +1127,28 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * If receive jumbo, updating ci will be done by * hinic3_recv_jumbo_pkt function. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1); - wqebb_cnt = 0; + hinic3_update_rq_local_ci(rxq, 1); hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len); sw_ci = hinic3_get_rq_local_ci(rxq); } - rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_off = RTE_PKTMBUF_HEADROOM + cqe_info.data_offset; rxm->port = rxq->port_id; /* 4. Rx checksum offload. */ - rxm->ol_flags |= hinic3_rx_csum(status, rxq); + rxm->ol_flags |= hinic3_rx_csum(cqe_info.csum_err, rxq); /* 5. Vlan offload. */ - offload_type = hinic3_hw_cpu32(rx_cqe->offload_type); - - rxm->ol_flags |= - hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci); + rxm->ol_flags |= hinic3_rx_vlan(cqe_info.vlan_offload, cqe_info.vlan_tag, + &rxm->vlan_tci); /* 6. RSS. */ - hash_value = hinic3_hw_cpu32(rx_cqe->hash_val); - rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value, + rxm->ol_flags |= hinic3_rx_rss_hash(cqe_info.rss_type, cqe_info.rss_hash_value, &rxm->hash.rss); /* 8. LRO. */ - lro_num = HINIC3_GET_RX_NUM_LRO(status); - if (unlikely(lro_num != 0)) { + if (unlikely(cqe_info.lro_num != 0)) { rxm->ol_flags |= HINIC3_PKT_RX_LRO; - rxm->tso_segsz = pkt_len / lro_num; + rxm->tso_segsz = pkt_len / cqe_info.lro_num; } rx_cqe->status = 0; @@ -1027,9 +1158,6 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } if (pkts) { - /* 9. Update local ci. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt); - /* Update packet stats. 
*/ rxq->rxq_stats.packets += pkts; rxq->rxq_stats.bytes += rx_bytes; diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 7ae39e3e91..2655802467 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -5,15 +5,13 @@ #ifndef _HINIC3_RX_H_ #define _HINIC3_RX_H_ -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0 -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19 -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_SHIFT 0 +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_MASK 0xFFFU +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU #define DPI_EXT_ACTION_FILED (1ULL << 32) @@ -21,6 +19,9 @@ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \ RQ_CQE_OFFOLAD_TYPE_##member##_MASK) +#define HINIC3_GET_RX_PTYPE_OFFLOAD(offload_type) \ + RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PTYPE_OFFLOAD) + #define HINIC3_GET_RX_PKT_TYPE(offload_type) \ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE) @@ -122,6 +123,54 @@ #define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD) +/* Compact CQE Field */ +/* cqe dw0 */ +#define RQ_COMPACT_CQE_STATUS_RXDONE_SHIFT 31 +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_SHIFT 30 +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_SHIFT 29 +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_SHIFT 28 +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_SHIFT 25 +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_SHIFT 24 +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_SHIFT 23 +#define RQ_COMPACT_CQE_STATUS_PKT_MC_SHIFT 21 +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_SHIFT 19 +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PTYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_SHIFT 0 + +#define RQ_COMPACT_CQE_STATUS_RXDONE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_MC_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_MASK 0x3U +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_PTYPE_MASK 0xFFFU +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_STATUS_GET(val, member) \ + ((((val) >> RQ_COMPACT_CQE_STATUS_##member##_SHIFT) & \ + RQ_COMPACT_CQE_STATUS_##member##_MASK)) + +#define HINIC3_RQ_CQE_SEPARATE 0 +#define HINIC3_RQ_CQE_INTEGRATE 1 + +/* cqe dw2 */ +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_SHIFT 24 +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_SHIFT 8 + +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_MASK 0xFFU +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(val, member) \ + (((val) >> RQ_COMPACT_CQE_OFFLOAD_##member##_SHIFT) & \ + RQ_COMPACT_CQE_OFFLOAD_##member##_MASK) + +#define HINIC3_RQ_COMPACT_CQE_16BYTE 0 +#define HINIC3_RQ_COMPACT_CQE_8BYTE 1 /* Rx cqe checksum err */ #define HINIC3_RX_CSUM_IP_CSUM_ERR RTE_BIT32(0) #define HINIC3_RX_CSUM_TCP_CSUM_ERR RTE_BIT32(1) @@ -195,6 +244,25 @@ struct __rte_cache_aligned hinic3_rq_cqe 
{ uint32_t pkt_info; }; +struct hinic3_cqe_info { + uint8_t data_offset; + uint8_t lro_num; + uint8_t vlan_offload; + uint8_t cqe_len; + + uint8_t cqe_type; + uint8_t ts_flag; + uint16_t csum_err; + + uint16_t vlan_tag; + uint16_t ptype; + + uint16_t pkt_len; + uint16_t rss_type; + + uint32_t rss_hash_value; +}; + /** * Attention: please do not add any member in hinic3_rx_info * because rxq bulk rearm mode will write mbuf in rx_info. @@ -220,13 +288,32 @@ struct hinic3_rq_normal_wqe { uint32_t cqe_lo_addr; }; +struct hinic3_rq_compact_wqe { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + struct hinic3_rq_wqe { union { + struct hinic3_rq_compact_wqe compact_wqe; struct hinic3_rq_normal_wqe normal_wqe; struct hinic3_rq_extend_wqe extend_wqe; }; }; +struct hinic3_rq_ci_wb { + union { + struct { + uint16_t cqe_num; + uint16_t hw_ci; + } bs; + uint32_t value; + } dw1; + + uint32_t rsvd[3]; +}; + + struct __rte_cache_aligned hinic3_rxq { struct hinic3_nic_dev *nic_dev; @@ -263,6 +350,10 @@ struct __rte_cache_aligned hinic3_rxq { struct hinic3_rq_cqe *rx_cqe; struct rte_mempool *mb_pool; + const struct rte_memzone *ci_mz; + struct hinic3_rq_ci_wb *rq_ci; + rte_iova_t rq_ci_paddr; + const struct rte_memzone *cqe_mz; rte_iova_t cqe_start_paddr; void *cqe_start_vaddr; @@ -308,6 +399,7 @@ void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev); int hinic3_update_rss_config(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq); int hinic3_poll_rq_empty(struct hinic3_rxq *rxq); void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, @@ -369,4 +461,60 @@ hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, uint16_t wqe_cnt) rxq->delta += wqe_cnt; } +/** + * Get receive cqe information + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * Receive cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Get receive compact cqe information + * + * @param[in] rx_queue + * Receive queue + * @param[in] rx_cqe + * Receive compact cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Check whether pkt is received when CQE is separated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/** + * Check whether pkt is received when CQE is integrated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + #endif /* _HINIC3_RX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
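[Editor's note to readers of the series: the rx patch above keeps hinic3_recv_pkts() generic by routing CQE polling and parsing through nic_dev->rx_ops. Below is a minimal sketch of how the two callback sets could be grouped and selected, assuming the choice is driven by HINIC3_SUPPORT_RX_HW_COMPACT_CQE(); the two static tables and the binding helper are illustrative names, while the callback functions themselves are the ones added by the patch.]

/* Sketch only: separate-CQE path (normal/extend RQ WQE, CQE ring kept beside the RQ). */
static struct hinic3_nic_rx_ops hinic3_separate_cqe_rx_ops = {
	.nic_rx_get_cqe_info  = hinic3_rx_get_cqe_info,
	.nic_rx_cqe_done      = hinic3_rx_separate_cqe_done,
	.nic_rx_poll_rq_empty = hinic3_poll_rq_empty,
};

/* Sketch only: compact-CQE path (CQE integrated with the packet buffer, hw CI written back). */
static struct hinic3_nic_rx_ops hinic3_compact_cqe_rx_ops = {
	.nic_rx_get_cqe_info  = hinic3_rx_get_compact_cqe_info,
	.nic_rx_cqe_done      = hinic3_rx_integrated_cqe_done,
	.nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty,
};

/* Hypothetical binding helper, selection criterion taken from the patch. */
static void
hinic3_init_nic_rx_ops(struct hinic3_nic_dev *nic_dev)
{
	if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev))
		nic_dev->rx_ops = &hinic3_compact_cqe_rx_ops;
	else
		nic_dev->rx_ops = &hinic3_separate_cqe_rx_ops;
}

Either way, the hot loop in hinic3_recv_pkts() only calls nic_rx_cqe_done() and nic_rx_get_cqe_info(), so the compact path's big-endian CQE parsing and data_offset handling stay out of the common receive code.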
* [V2 6/7] net/hinic3: add tx ops to support Compact CQE 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (4 preceding siblings ...) 2026-03-16 13:43 ` [V2 5/7] net/hinic3: add rx " Feifei Wang @ 2026-03-16 13:43 ` Feifei Wang 2026-03-16 13:43 ` [V2 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang ` (4 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-16 13:43 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt send path, use different func callback to configure compact wqe and normal wqe offload. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_tx.c | 454 +++++++++++++++++---------------- drivers/net/hinic3/hinic3_tx.h | 146 +++++++++-- 2 files changed, 361 insertions(+), 239 deletions(-) diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index c896fcc76b..fca94dd08e 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -21,6 +21,7 @@ #define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET 1 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0 +#define MAX_TSO_NUM_FRAG 1024 #define HINIC3_TX_OFFLOAD_MASK \ (HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT) @@ -28,7 +29,8 @@ #define HINIC3_TX_CKSUM_OFFLOAD_MASK \ (HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \ HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \ - HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG) + HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_OUTER_UDP_CKSUM | \ + HINIC3_PKT_TX_TCP_SEG) static inline uint16_t hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq) @@ -56,26 +58,23 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq) } static void * -hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) +hinic3_sq_get_wqebbs(struct hinic3_txq *sq, uint16_t num_wqebbs, uint16_t *prod_idx) { - uint16_t cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx); - uint32_t end_pi; + *prod_idx = MASKED_QUEUE_IDX(sq, sq->prod_idx); + sq->prod_idx += num_wqebbs; - end_pi = cur_pi + wqe_info->wqebb_cnt; - sq->prod_idx += wqe_info->wqebb_cnt; + return NIC_WQE_ADDR(sq, *prod_idx); +} - wqe_info->owner = (uint8_t)(sq->owner); - wqe_info->pi = cur_pi; - wqe_info->wrapped = 0; +static inline uint16_t +hinic3_get_and_update_sq_owner(struct hinic3_txq *sq, uint16_t curr_pi, uint16_t wqebb_cnt) +{ + uint16_t owner = sq->owner; - if (unlikely(end_pi >= sq->q_depth)) { + if (unlikely(curr_pi + wqebb_cnt >= sq->q_depth)) sq->owner = !sq->owner; - if (likely(end_pi > sq->q_depth)) - wqe_info->wrapped = (uint8_t)(sq->q_depth - cur_pi); - } - - return NIC_WQE_ADDR(sq, cur_pi); + return owner; } static inline void @@ -90,61 +89,39 @@ hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) /** * Sets the WQE combination information in the transmit queue (SQ). * - * @param[in] txq + * @param[in] sq * Point to send queue. * @param[out] wqe_combo * Point to wqe_combo of send queue(SQ). - * @param[in] wqe - * Point to wqe of send queue(SQ). * @param[in] wqe_info * Point to wqe_info of send queue(SQ). 
*/ static void -hinic3_set_wqe_combo(struct hinic3_txq *txq, +hinic3_set_wqe_combo(struct hinic3_txq *sq, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_sq_wqe *wqe, struct hinic3_wqe_info *wqe_info) { - wqe_combo->hdr = &wqe->compact_wqe.wqe_desc; - - if (wqe_info->offload) { - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->task = (struct hinic3_sq_task *) - (void *)txq->sq_head_addr; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr + txq->wqebb_size); - } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = wqe->extend_wqe.buf_desc; - } + uint16_t tmp_pi; - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->hdr = hinic3_sq_get_wqebbs(sq, 1, &wqe_info->pi); + if (wqe_info->wqebb_cnt == 1) { + /* compact wqe */ + wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_4BYTES; + wqe_combo->task = (struct hinic3_sq_task *)&wqe_combo->hdr->queue_info; + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, 1); return; } - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->bds_head = - (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task); - } + /* extend normal wqe */ + wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->task = hinic3_sq_get_wqebbs(sq, 1, &tmp_pi); + if (wqe_info->sge_cnt > 1) + wqe_combo->bds_head = hinic3_sq_get_wqebbs(sq, wqe_info->sge_cnt - 1, &tmp_pi); - if (wqe_info->wqebb_cnt > 1) { - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS; - - /* This section used as vlan insert, needs to clear. */ - wqe_combo->bds_head->rsvd = 0; - } else { - wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; - } + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, wqe_info->wqebb_cnt); } int @@ -311,6 +288,8 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) /** * Prepare the data packet to be sent and calculate the internal L3 offset. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to be processed. * @param[out] inner_l3_offset @@ -319,14 +298,20 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) * 0 as success, -EINVAL as failure. */ static int -hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) +hinic3_tx_offload_pkt_prepare(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + uint16_t *inner_l3_offset) { uint64_t ol_flags = mbuf->ol_flags; - /* Only support vxlan offload. 
*/ - if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) && - (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN))) - return -EINVAL; + if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) { + if (!(((ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN) && + HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_GENEVE) && + HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_IPIP) && + HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)))) + return -EINVAL; + } #ifdef RTE_LIBRTE_ETHDEV_DEBUG if (rte_validate_tx_offload(mbuf) != 0) @@ -358,107 +343,121 @@ hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) return 0; } -static inline void -hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, uint16_t vlan_tag, - uint8_t vlan_type) +void +hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) +{ + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l4_en, INNER_L4_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l3_en, INNER_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->encapsulation, TUNNEL_FLAG); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l3_en, OUT_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l4_en, OUT_L4_EN); + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); + + if (wqe_combo->task_type == SQ_WQE_TASKSECT_16BYTES) { + task->ip_identify = 0; + task->pkt_info2 = 0; + task->vlan_offload = 0; + task->vlan_offload = SQ_TASK_INFO3_SET(offload_info->vlan_tag, VLAN_TAG) | + SQ_TASK_INFO3_SET(offload_info->vlan_sel, VLAN_TYPE) | + SQ_TASK_INFO3_SET(offload_info->vlan_valid, VLAN_TAG_VALID); + task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + } +} + +void +hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) | - SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) | - SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID); + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + wqe_combo->task->pkt_info0 = + SQ_TASK_INFO_SET(offload_info->out_l3_en, OUT_L3_EN) | + SQ_TASK_INFO_SET(offload_info->out_l4_en, OUT_L4_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l3_en, INNER_L3_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l4_en, INNER_L4_EN) | + SQ_TASK_INFO_SET(offload_info->vlan_valid, VLAN_VALID) | + SQ_TASK_INFO_SET(offload_info->vlan_sel, VLAN_SEL) | + SQ_TASK_INFO_SET(offload_info->vlan_tag, VLAN_TAG); + + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); } -/** - * Set the corresponding offload information based on ol_flags of the mbuf. - * - * @param[in] mbuf - * Point to the mbuf for which offload needs to be set in the sending queue. - * @param[out] task - * Point to task of send queue(SQ). - * @param[out] wqe_info - * Point to wqe_info of send queue(SQ). - * @return - * 0 as success, -EINVAL as failure. 
- */ static int -hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task, - struct hinic3_wqe_info *wqe_info) +hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, + struct rte_mbuf *mbuf, + struct hinic3_sq_wqe_combo *wqe_combo, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t pld_offset = 0; - uint32_t queue_info = 0; - uint16_t vlan_tag; - - task->pkt_info0 = 0; - task->ip_identify = 0; - task->pkt_info2 = 0; - task->vlan_offload = 0; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; /* Vlan offload. */ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) { - vlan_tag = mbuf->vlan_tci; - hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0); - task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + offload_info->vlan_valid = 1; + offload_info->vlan_tag = mbuf->vlan_tci; + offload_info->vlan_sel = HINIC3_TX_TPID0; } - /* Cksum offload. */ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK)) - return 0; + goto set_tx_wqe_offload; /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - pld_offset = wqe_info->payload_offset; - if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET) + wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); - - queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF); - - /* Set MSS value. */ - queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS); + offload_info->inner_l3_en = 1; + offload_info->inner_l4_en = 1; + wqe_info->queue_info.tso = 1; + wqe_info->queue_info.mss = mbuf->tso_segsz; } else { if (ol_flags & HINIC3_PKT_TX_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); + offload_info->inner_l3_en = 1; switch (ol_flags & HINIC3_PKT_TX_L4_MASK) { case HINIC3_PKT_TX_TCP_CKSUM: case HINIC3_PKT_TX_UDP_CKSUM: case HINIC3_PKT_TX_SCTP_CKSUM: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); + offload_info->inner_l4_en = 1; break; - case HINIC3_PKT_TX_L4_NO_CKSUM: break; - default: PMD_DRV_LOG(INFO, "not support pkt type"); return -EINVAL; } } - /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) { case HINIC3_PKT_TX_TUNNEL_VXLAN: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG); + case HINIC3_PKT_TX_TUNNEL_VXLAN_GPE: + case HINIC3_PKT_TX_TUNNEL_GENEVE: + offload_info->encapsulation = 1; + wqe_info->queue_info.udp_dp_en = 1; break; - case 0: break; default: - /* For non UDP/GRE tunneling, drop the tunnel packet. 
*/ PMD_DRV_LOG(INFO, "not support tunnel pkt type"); return -EINVAL; } if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN); + offload_info->out_l3_en = 1; - task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); - task->pkt_info2 = hinic3_hw_be32(task->pkt_info2); - wqe_info->queue_info = queue_info; + if (ol_flags & HINIC3_PKT_TX_OUTER_UDP_CKSUM) + offload_info->out_l4_en = 1; + +set_tx_wqe_offload: + nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -477,7 +476,9 @@ static bool hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) { uint32_t total_len, limit_len, checked_len, left_len, adjust_mss; - uint32_t i, max_sges, left_sges, first_len; + uint32_t max_sges, left_sges, first_len; + uint32_t payload_len, frag_num; + uint32_t i; struct rte_mbuf *mbuf_head, *mbuf_first; struct rte_mbuf *mbuf_pre = mbuf; @@ -485,6 +486,17 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) mbuf_head = mbuf; mbuf_first = mbuf; + /* Calculate the number of message payload frag, + * if it exceeds the hardware limit of 10 bits, + * packet will be discarded. + */ + payload_len = mbuf_head->pkt_len - wqe_info->payload_offset; + frag_num = (payload_len + mbuf_head->tso_segsz - 1) / mbuf_head->tso_segsz; + if (frag_num > MAX_TSO_NUM_FRAG) { + PMD_DRV_LOG(WARNING, "tso frag num over hw limit, frag_num:0x%x.", frag_num); + return false; + } + /* Tso sge number validation. */ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) { checked_len = 0; @@ -544,9 +556,48 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) return true; } +static int +hinic3_non_tso_pkt_pre_process(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +{ + struct rte_mbuf *mbuf_pkt = mbuf; + uint32_t total_len = 0; + uint16_t i; + + if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) + return 0; + + /* Non-tso packet length must less than 64KB. */ + if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) + return -EINVAL; + + /* + * Mbuf number of non-tso packet must less than the sge number + * that nic can support. The excess part will be copied to another + * mbuf. + */ + for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { + total_len += mbuf_pkt->data_len; + mbuf_pkt = mbuf_pkt->next; + } + + /* + * Max copy mbuf size is 4KB, packet will be dropped directly, + * if total copy length is more than it. + */ + if ((total_len + HINIC3_COPY_MBUF_SIZE) < mbuf->pkt_len) + return -EINVAL; + + wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; + wqe_info->cpy_mbuf_cnt = 1; + + return 0; +} + /** * Checks and processes transport offload information for data packets. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to send. * @param[in] wqe_info @@ -555,56 +606,29 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) * 0 as success, -EINVAL as failure. */ static int -hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t i, total_len, inner_l3_offset = 0; + uint16_t inner_l3_offset = 0; int err; - struct rte_mbuf *mbuf_pkt = NULL; wqe_info->sge_cnt = mbuf->nb_segs; + wqe_info->cpy_mbuf_cnt = 0; /* Check if the packet set available offload flags. 
*/ if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) { wqe_info->offload = 0; - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } wqe_info->offload = 1; - err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset); + err = hinic3_tx_offload_pkt_prepare(nic_dev, mbuf, &inner_l3_offset); if (err) return err; - /* Non tso mbuf only check sge num. */ + /* Non-tso mbuf only check sge num. */ if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { - if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) - /* Non tso packet len must less than 64KB. */ - return -EINVAL; - - if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) - /* Valid non-tso mbuf. */ - return 0; - - /* - * The number of non-tso packet fragments must be less than 38, - * and mbuf segs greater than 38 must be copied to other - * buffers. - */ - total_len = 0; - mbuf_pkt = mbuf; - for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { - total_len += mbuf_pkt->data_len; - mbuf_pkt = mbuf_pkt->next; - } - - /* Default support copy total 4k mbuf segs. */ - if ((uint32_t)(total_len + (uint16_t)HINIC3_COPY_MBUF_SIZE) < - mbuf->pkt_len) - return -EINVAL; - - wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; - wqe_info->cpy_mbuf_cnt = 1; - - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } /* Tso mbuf. */ @@ -629,6 +653,7 @@ hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); buf_descs->len = hinic3_hw_be32(len); + buf_descs->rsvd = 0; } static inline struct rte_mbuf * @@ -701,7 +726,6 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, { struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head; - uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt; uint16_t real_segs = mbuf->nb_segs; rte_iova_t dma_addr; @@ -736,11 +760,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, * Parts of wqe is in sq bottom while parts * of wqe is in sq head. 
*/ - if (unlikely(wqe_info->wrapped && - (uint64_t)buf_desc == txq->sq_bot_sge_addr)) - buf_desc = (struct hinic3_sq_bufdesc *) - (void *)txq->sq_head_addr; - + if (unlikely((uint64_t)buf_desc == txq->sq_bot_sge_addr)) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); buf_desc++; } @@ -777,10 +798,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, hinic3_hw_be32(lower_32_bits(dma_addr)); wqe_desc->ctrl_len = mbuf->data_len; } else { - if (unlikely(wqe_info->wrapped && - ((uint64_t)buf_desc == txq->sq_bot_sge_addr))) - buf_desc = (struct hinic3_sq_bufdesc *) - txq->sq_head_addr; + if (unlikely(((uint64_t)buf_desc == txq->sq_bot_sge_addr))) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); } @@ -802,44 +821,44 @@ static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, struct hinic3_wqe_info *wqe_info) { + struct hinic3_queue_info *queue_info = &wqe_info->queue_info; struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; + uint32_t *qsf = &wqe_desc->queue_info; - if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) { - wqe_desc->ctrl_len |= - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | - SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | - SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - /* Compact wqe queue_info will transfer to ucode. */ - wqe_desc->queue_info = 0; - - return; - } - - wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | - SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | + wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DIRECT) | SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - wqe_desc->queue_info = wqe_info->queue_info; - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC); - - if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) { - wqe_desc->queue_info |= - SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); - } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) < - TX_MSS_MIN) { - /* Mss should not less than 80. */ - wqe_desc->queue_info = - SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS); - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { + wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | + SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | + SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + + *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); + + if (!SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS)) { + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); + } else if (SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS) < TX_MSS_MIN) { + /* MSS should not less than 80. 
*/
+			*qsf = SQ_CTRL_QUEUE_INFO_CLEAR(*qsf, MSS);
+			*qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS);
+		}
+		*qsf = hinic3_hw_be32(*qsf);
+	} else {
+		wqe_desc->ctrl_len |= SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->sctp, SCTP) |
+			SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->udp_dp_en, UDP_DP_EN) |
+			SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->ufo, UFO) |
+			SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE);
 	}
 
-	wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info);
+	wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
 }
 
 /**
@@ -861,9 +880,7 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct hinic3_tx_info *tx_info = NULL;
 	struct rte_mbuf *mbuf_pkt = NULL;
 	struct hinic3_sq_wqe_combo wqe_combo = {0};
-	struct hinic3_sq_wqe *sq_wqe = NULL;
 	struct hinic3_wqe_info wqe_info = {0};
-
 	uint32_t offload_err, free_cnt;
 	uint64_t tx_bytes = 0;
 	uint16_t free_wqebb_cnt, nb_tx;
@@ -885,16 +902,28 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	/* Tx loop routine. */
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		mbuf_pkt = *tx_pkts++;
-		if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) {
+		if (unlikely(hinic3_get_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_info))) {
 			txq->txq_stats.offload_errors++;
 			break;
 		}
 
-		if (!wqe_info.offload)
-			wqe_info.wqebb_cnt = wqe_info.sge_cnt;
-		else
-			/* Use extended sq wqe with normal TS. */
-			wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1;
+		wqe_info.wqebb_cnt = wqe_info.sge_cnt;
+		if (likely(wqe_info.offload || wqe_info.wqebb_cnt > 1)) {
+			if (txq->tx_wqe_compact_task) {
+				/**
+				 * One more wqebb is needed for compact task under two situations:
+				 * 1. TSO: MSS field is needed, no available space for
+				 * compact task in compact wqe.
+				 * 2. SGE number > 1: wqe is handled as extended wqe by nic.
+				 */
+				if (mbuf_pkt->ol_flags & HINIC3_PKT_TX_TCP_SEG ||
+				    wqe_info.wqebb_cnt > 1)
+					wqe_info.wqebb_cnt++;
+			} else {
+				/* Use extended sq wqe with normal TS */
+				wqe_info.wqebb_cnt++;
+			}
+		}
 
 		free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq);
 		if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
@@ -907,28 +936,16 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 		}
 
-		/* Get sq wqe address from wqe_page. */
-		sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info);
-		if (unlikely(!sq_wqe)) {
-			txq->txq_stats.tx_busy++;
-			break;
-		}
-
-		/* Task or bd section maybe wrapped for one wqe. */
-		hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info);
+		/* Task or bd section may be wrapped for one wqe. */
+		hinic3_set_wqe_combo(txq, &wqe_combo, &wqe_info);
 
-		wqe_info.queue_info = 0;
 		/* Fill tx packet offload into qsf and task field. */
-		if (wqe_info.offload) {
-			offload_err = hinic3_set_tx_offload(mbuf_pkt,
-							    wqe_combo.task,
-							    &wqe_info);
+		offload_err = hinic3_set_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_combo, &wqe_info);
 			if (unlikely(offload_err)) {
 				hinic3_put_sq_wqe(txq, &wqe_info);
 				txq->txq_stats.offload_errors++;
 				break;
 			}
-		}
 
 		/* Fill sq_wqe buf_desc and bd_desc. */
 		err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo,
@@ -944,7 +961,12 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		tx_info->mbuf = mbuf_pkt;
 		tx_info->wqebb_cnt = wqe_info.wqebb_cnt;
 
-		hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info);
+		/*
+		 * For wqe compact type, no need to prepare
+		 * sq ctrl info.
+ */ + if (wqe_combo.wqe_type != SQ_WQE_COMPACT_TYPE) + hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); tx_bytes += mbuf_pkt->pkt_len; } diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index 21958a00cc..e0ed9908ad 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -6,30 +6,40 @@ #define _HINIC3_TX_H_ #define MAX_SINGLE_SGE_SIZE 65536 -#define HINIC3_NONTSO_PKT_MAX_SGE 38 /**< non-tso max sge 38. */ +#define HINIC3_NONTSO_PKT_MAX_SGE 32 /**< non-tso max sge 32. */ #define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE) #define HINIC3_TSO_PKT_MAX_SGE 127 /**< tso max sge 127. */ #define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE) -/* Tx offload info. */ -struct hinic3_tx_offload_info { - uint8_t outer_l2_len; - uint8_t outer_l3_type; - uint16_t outer_l3_len; - - uint8_t inner_l2_len; - uint8_t inner_l3_type; - uint16_t inner_l3_len; - - uint8_t tunnel_length; - uint8_t tunnel_type; - uint8_t inner_l4_type; - uint8_t inner_l4_len; +/* Tx wqe queue info */ +struct hinic3_queue_info { + uint8_t pri; + uint8_t uc; + uint8_t sctp; + uint8_t udp_dp_en; + uint8_t tso; + uint8_t ufo; + uint8_t payload_offset; + uint8_t pkt_type; + uint16_t mss; + uint16_t rsvd; +}; - uint16_t payload_offset; - uint8_t inner_l4_tcp_udp; - uint8_t rsvd0; /**< Reserved field. */ +/* Tx wqe offload info */ +struct hinic3_offload_info { + uint8_t encapsulation; + uint8_t esp_next_proto; + uint8_t inner_l4_en; + uint8_t inner_l3_en; + uint8_t out_l4_en; + uint8_t out_l3_en; + uint8_t ipsec_offload; + uint8_t pkt_1588; + uint8_t vlan_sel; + uint8_t vlan_valid; + uint16_t vlan_tag; + uint32_t ip_identify; }; /* Tx wqe ctx. */ @@ -42,14 +52,15 @@ struct hinic3_wqe_info { uint8_t rsvd0; /**< Reserved field 0. */ uint16_t payload_offset; - uint8_t wrapped; + uint8_t rsvd1; /**< Reserved field 1. */ uint8_t owner; uint16_t pi; uint16_t wqebb_cnt; - uint16_t rsvd1; /**< Reserved field 1. */ + uint16_t rsvd2; /**< Reserved field 2. */ - uint32_t queue_info; + struct hinic3_queue_info queue_info; + struct hinic3_offload_info offload_info; }; /* Descriptor for the send queue of wqe. */ @@ -103,8 +114,15 @@ struct hinic3_sq_wqe_combo { uint32_t task_type; }; -enum sq_wqe_data_format { +/* Tx queue ctrl info */ +enum sq_wqe_type { SQ_NORMAL_WQE = 0, + SQ_DIRECT_WQE = 1, +}; + +enum sq_wqe_data_format { + SQ_WQE_SGL = 0, + SQ_WQE_INLINE_DATA = 1, }; /* Indicates the type of a WQE. */ @@ -117,7 +135,7 @@ enum sq_wqe_ec_type { /* Indicates the type of tasks with different lengths. 
*/ enum sq_wqe_tasksect_len_type { - SQ_WQE_TASKSECT_46BITS = 0, + SQ_WQE_TASKSECT_4BYTES = 0, SQ_WQE_TASKSECT_16BYTES = 1, }; @@ -177,6 +195,33 @@ enum sq_wqe_tasksect_len_type { ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT))) +/* Compact queue info */ +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_SHIFT 14 +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_SHIFT 16 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_SHIFT 24 +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_SHIFT 25 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_SHIFT 26 +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_SHIFT 27 + +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_MASK 0x3U +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_MASK 0xFFU +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_MASK 0x1U + +#define SQ_CTRL_COMPACT_QUEUE_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_GET(val, member) \ + (((val) >> SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) & \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_CLEAR(val, member) \ + ((val) & (~(SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT))) + /* Setting and obtaining task information */ #define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19 #define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22 @@ -229,6 +274,37 @@ enum sq_wqe_tasksect_len_type { (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \ SQ_TASK_INFO3_##member##_MASK) +/* compact wqe task field */ +#define SQ_TASK_INFO_PKT_1588_SHIFT 31 +#define SQ_TASK_INFO_IPSEC_PROTO_SHIFT 30 +#define SQ_TASK_INFO_OUT_L3_EN_SHIFT 28 +#define SQ_TASK_INFO_OUT_L4_EN_SHIFT 27 +#define SQ_TASK_INFO_INNER_L3_EN_SHIFT 25 +#define SQ_TASK_INFO_INNER_L4_EN_SHIFT 24 +#define SQ_TASK_INFO_ESP_NEXT_PROTO_SHIFT 22 +#define SQ_TASK_INFO_VLAN_VALID_SHIFT 19 +#define SQ_TASK_INFO_VLAN_SEL_SHIFT 16 +#define SQ_TASK_INFO_VLAN_TAG_SHIFT 0 + +#define SQ_TASK_INFO_PKT_1588_MASK 0x1U +#define SQ_TASK_INFO_IPSEC_PROTO_MASK 0x1U +#define SQ_TASK_INFO_OUT_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_OUT_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_ESP_NEXT_PROTO_MASK 0x3U +#define SQ_TASK_INFO_VLAN_VALID_MASK 0x1U +#define SQ_TASK_INFO_VLAN_SEL_MASK 0x7U +#define SQ_TASK_INFO_VLAN_TAG_MASK 0xFFFFU + +#define SQ_TASK_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_TASK_INFO_##member##_MASK) << \ + SQ_TASK_INFO_##member##_SHIFT) + +#define SQ_TASK_INFO_GET(val, member) \ + (((val) >> SQ_TASK_INFO_##member##_SHIFT) & \ + SQ_TASK_INFO_##member##_MASK) + /* Defines the TX queue status. 
*/
 enum hinic3_txq_status {
 	HINIC3_TXQ_STATUS_START = 0,
@@ -298,6 +374,8 @@ struct __rte_cache_aligned hinic3_txq {
 	uint64_t sq_head_addr;
 	uint64_t sq_bot_sge_addr;
 	uint32_t cos;
+	uint8_t tx_wqe_compact_task;
+	uint8_t rsvd[3];
 	struct hinic3_txq_stats txq_stats;
 #ifdef HINIC3_XSTAT_PROF_TX
 	uint64_t prof_tx_end_tsc;
@@ -319,4 +397,26 @@ uint16_t hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb
 int hinic3_stop_sq(struct hinic3_txq *txq);
 int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev);
 int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
+/**
+ * Set normal wqe task section
+ *
+ * @param[in] wqe_info
+ * Packet info parsed from the mbuf
+ * @param[out] wqe_combo
+ * Wqe to be formatted
+ */
+void hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info,
+				       struct hinic3_sq_wqe_combo *wqe_combo);
+
+/**
+ * Set compact wqe task section
+ *
+ * @param[in] wqe_info
+ * Packet info parsed from the mbuf
+ * @param[out] wqe_combo
+ * Wqe to be formatted
+ */
+void hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info,
+					struct hinic3_sq_wqe_combo *wqe_combo);
 #endif /**< _HINIC3_TX_H_ */
--
2.45.1.windows.1
^ permalink raw reply	related [flat|nested] 80+ messages in thread
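A standalone sketch of the wqebb accounting performed in the transmit patch above, assuming the rule it states: with a compact task section a single-SGE packet without TSO fits in one wqebb, TSO (which needs the MSS field) or more than one SGE costs one extra wqebb, and the normal 16-byte task section always costs the extra wqebb once offload is requested. The helper below and all of its names are hypothetical; only the branch structure mirrors the patch.

/* Hypothetical helper; only the decision mirrors hinic3_xmit_pkts() above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t
demo_wqebb_cnt(uint16_t sge_cnt, bool offload, bool tso, bool compact_task)
{
	uint16_t cnt = sge_cnt;

	if (!offload && sge_cnt <= 1)
		return cnt;		/* single SGE, no offload: one compact wqebb */

	if (compact_task) {
		/* Compact task lives in the first wqebb; TSO (MSS field) or a
		 * multi-SGE packet needs one more wqebb. */
		if (tso || sge_cnt > 1)
			cnt++;
	} else {
		/* Normal 16-byte task section always takes an extra wqebb. */
		cnt++;
	}
	return cnt;
}

int
main(void)
{
	printf("%u\n", (unsigned)demo_wqebb_cnt(1, true, false, true));  /* 1 */
	printf("%u\n", (unsigned)demo_wqebb_cnt(3, true, true, true));   /* 4 */
	printf("%u\n", (unsigned)demo_wqebb_cnt(2, true, false, false)); /* 3 */
	return 0;
}

For example, demo_wqebb_cnt(3, true, true, true) returns 4, one wqebb more than the SGE count, since a TSO packet cannot use the compact task section.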
* [V2 7/7] net/hinic3: use different callback func to support htn fdir 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (5 preceding siblings ...) 2026-03-16 13:43 ` [V2 6/7] net/hinic3: add tx " Feifei Wang @ 2026-03-16 13:43 ` Feifei Wang 2026-03-16 15:45 ` [V2 0/7] hinic3 change for support new SPx NIC Stephen Hemminger ` (3 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-16 13:43 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, the way flow rules created is different from previous SPx NIC, so use different callback func to split them. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 55 +- drivers/net/hinic3/base/hinic3_nic_cfg.h | 19 +- drivers/net/hinic3/hinic3_ethdev.c | 41 +- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.h | 16 - drivers/net/hinic3/hinic3_rx.c | 26 +- drivers/net/hinic3/hinic3_tx.c | 16 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 22 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 8 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 2 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 8 + 13 files changed, 888 insertions(+), 351 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index 22caac0457..5387626b98 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -970,7 +970,7 @@ hinic3_set_vlan_filter(struct hinic3_hwdev *hwdev, uint32_t vlan_filter_ctrl) static int hinic3_set_rx_lro(struct hinic3_hwdev *hwdev, uint8_t ipv4_en, - uint8_t ipv6_en, uint8_t lro_max_pkt_len) + uint8_t ipv6_en, uint8_t lro_max_pkt_len) { struct hinic3_cmd_lro_config lro_cfg = {0}; uint16_t out_size = sizeof(lro_cfg); @@ -1029,7 +1029,7 @@ hinic3_set_rx_lro_timer(struct hinic3_hwdev *hwdev, uint32_t timer_value) } int -hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len) { uint8_t ipv4_en = 0, ipv6_en = 0; @@ -1468,54 +1468,6 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return 0; } -/** - * Set the Ethernet type filtering rule for the FDIR of a NIC. - * - * @param[in] hwdev - * Pointer to hardware device structure. - * @param[in] pkt_type - * Indicate the packet type. - * @param[in] queue_id - * Indicate the queue id. - * @param[in] en - * Indicate whether to add or delete an operation. 1 - add; 0 - delete. - * - * @return - * 0 on success, non-zero on failure. 
- */ -int -hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en) -{ - struct hinic3_set_fdir_ethertype_rule ethertype_cmd; - uint16_t out_size = sizeof(ethertype_cmd); - int err; - - if (!hwdev) - return -EINVAL; - - memset(ðertype_cmd, 0, - sizeof(struct hinic3_set_fdir_ethertype_rule)); - ethertype_cmd.func_id = hinic3_global_func_id(hwdev); - ethertype_cmd.pkt_type = pkt_type; - ethertype_cmd.pkt_type_en = en; - ethertype_cmd.qid = (uint8_t)queue_id; - - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, - HINIC3_NIC_CMD_SET_FDIR_STATUS, - ðertype_cmd, sizeof(ethertype_cmd), - ðertype_cmd, &out_size); - if (err || ethertype_cmd.head.status || !out_size) { - PMD_DRV_LOG(ERR, - "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", - err, ethertype_cmd.head.status, out_size, - ethertype_cmd.func_id); - return -EIO; - } - - return 0; -} - int hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule, uint8_t tcam_rule_type) @@ -1543,8 +1495,7 @@ hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tc &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd, &out_size); if (err || tcam_cmd.msg_head.status || !out_size) { - PMD_DRV_LOG(ERR, - "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", + PMD_DRV_LOG(ERR, "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", err, tcam_cmd.msg_head.status, out_size); return -EIO; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index 06d5bc7d1b..6d3eb433bd 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -1203,7 +1203,7 @@ int hinic3_set_rx_vlan_offload(struct hinic3_hwdev *hwdev, uint8_t en); * @return * 0 on success, non-zero on failure. */ -int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len); /** @@ -1522,8 +1522,21 @@ int hinic3_get_feature_from_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, */ int hinic3_set_feature_to_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, uint16_t size); -int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en); +/** + * Set the Ethernet type filtering rule for the FDIR of a NIC. + * + * @param[in] hwdev + * Pointer to hardware device structure. + * @param[in] pkt_type + * Indicate the packet type. + * @param[in] ethertype_filter + * Pointer to ethertype_filter structure. + * @param[in] en + * Indicate whether to add or delete an operation. 1 - add; 0 - delete. + * + * @return + * 0 on success, non-zero on failure. 
+ */ int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status); diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index ed2edb51c1..4495ce954b 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -975,8 +975,8 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, "RX queue depth is out of range from %d to %d", HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH); PMD_DRV_LOG(ERR, - "nb_desc: %d, q_depth: %d, port: %d queue: %d", - nb_desc, rq_depth, dev->data->port_id, qid); + "nb_desc: %d, q_depth: %d, port: %d queue: %d", + nb_desc, rq_depth, dev->data->port_id, qid); return -EINVAL; } @@ -2159,8 +2159,7 @@ hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) } /* Update max frame size. */ - HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = - HINIC3_MTU_TO_PKTLEN(mtu); + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = HINIC3_MTU_TO_PKTLEN(mtu); nic_dev->mtu_size = mtu; return err; } @@ -2358,6 +2357,12 @@ hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev) uint32_t rx_mode; int err; + if (!(nic_dev->feature_cap & NIC_F_PROMISC)) { + PMD_DRV_LOG(ERR, "nic_dev: %s, port_id: %d, do not support vf promisc: %" PRIu64 "", + nic_dev->dev_name, dev->data->port_id, nic_dev->feature_cap); + return -ENOTSUP; + } + rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC; err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode); @@ -2528,20 +2533,22 @@ hinic3_rss_hash_update(struct rte_eth_dev *dev, } rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | - RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) - ? 1 - : 0; + RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | - RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) - ? 1 - : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_type.tcp_ipv6_ext = rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); if (err) PMD_DRV_LOG(ERR, "Set RSS type failed"); @@ -2598,12 +2605,16 @@ hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) rss_conf->rss_hf |= rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) : 0; - rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0; - rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0; rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0; - + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? 
RTE_ETH_RSS_IPV6_TCP_EX : 0; + } else { + rss_conf->rss_hf |= 0; + rss_conf->rss_hf |= 0; + } return 0; } diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c index 263a281729..37a4f0cf52 100644 --- a/drivers/net/hinic3/hinic3_fdir.c +++ b/drivers/net/hinic3/hinic3_fdir.c @@ -2,15 +2,15 @@ * Copyright(c) 2025 Huawei Technologies Co., Ltd */ +#include "base/hinic3_cmd.h" #include "base/hinic3_compat.h" #include "base/hinic3_hwdev.h" #include "base/hinic3_hwif.h" #include "base/hinic3_nic_cfg.h" #include "hinic3_ethdev.h" +#include "hinic3_nic_io.h" -#define HINIC3_UINT1_MAX 0x1 -#define HINIC3_UINT4_MAX 0xf -#define HINIC3_UINT15_MAX 0x7fff +#define HINIC3_INVALID_INDEX -1 #define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \ (&((struct hinic3_nic_dev *)(nic_dev))->tcam) @@ -77,6 +77,8 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src IPv4. */ tcam_key->key_mask.sipv4_h = @@ -99,15 +101,9 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip); } -static void -hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) +static void hinic3_fdir_ipv6_tcam_key_init_sip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) { - /* Fill type of ip. */ - tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; - tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; - - /* Fill src IPv6. */ tcam_key->key_mask_ipv6.sipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); tcam_key->key_mask_ipv6.sipv6_key1 = @@ -140,8 +136,11 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); tcam_key->key_info_ipv6.sipv6_key7 = HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); +} - /* Fill dst IPv6. */ +static void hinic3_fdir_ipv6_tcam_key_init_dip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ tcam_key->key_mask_ipv6.dipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); tcam_key->key_mask_ipv6.dipv6_key1 = @@ -176,6 +175,26 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); } +static void hinic3_fdir_ipv6_tcam_key_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + hinic3_fdir_ipv6_tcam_key_init_sip(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init_dip(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + /* Fill type of ip. */ + tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); +} + /** * Set the TCAM information in notunnel scenario. 
* @@ -204,6 +223,10 @@ hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev, tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + + tcam_key->key_mask.vlan_flag = 1; + tcam_key->key_info.vlan_flag = 0; + tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -223,6 +246,8 @@ hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src ipv4. */ tcam_key->key_mask.sipv4_h = @@ -252,6 +277,8 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_vxlan_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_vxlan_ipv6.vlan_flag = 0; /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. */ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 = @@ -288,77 +315,6 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]); } -static void -hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) -{ - tcam_key->key_mask_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_mask_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - - tcam_key->key_mask_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key5 = - 
HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_mask_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); -} - static void hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, struct hinic3_fdir_filter *rule, @@ -370,11 +326,14 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto; tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info_ipv6.tunnel_type = rule->tunnel_type; tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; tcam_key->key_info_ipv6.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -386,7 +345,7 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.sport = rule->key_spec.src_port; if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY) - hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); } /** @@ -448,9 +407,11 @@ hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev, HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info.tunnel_type = rule->tunnel_type; + tcam_key->key_mask.vlan_flag = 1; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + tcam_key->key_info.vlan_flag = 0; tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -479,6 +440,259 @@ hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev, tcam_key_calculate(tcam_key, fdir_tcam_rule); } +static void +hinic3_fdir_tcam_key_set_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.dipv4_l = + 
HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_sip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_mask_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_dip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_mask_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + 
tcam_key->key_mask_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void hinic3_fdir_tcam_key_set_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_sip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_notunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.outer_ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6_htn.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.inner_ipv4, + &rule->key_spec.inner_ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_vxlan_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_vxlan_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.inner_ipv6, + &rule->key_spec.inner_ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_tunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = rule->tunnel_type; + + tcam_key->key_mask_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_mask_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_h = + 
HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); + + if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_outer_ipv4_info(rule, tcam_key); + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_inner_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_inner_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_info_htn_init(struct rte_eth_dev *dev, + struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key, + struct hinic3_tcam_cfg_rule *fdir_tcam_rule) +{ + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + + tcam_key->key_mask_htn.function_id_h = HINIC3_UINT5_MAX; + tcam_key->key_mask_htn.function_id_l = HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_l = + hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_h = + (hinic3_global_func_id(nic_dev->hwdev) >> HINIC3_UINT5_WIDTH) & HINIC3_UINT5_MAX; + + tcam_key->key_mask_htn.ip_proto = rule->key_mask.proto; + tcam_key->key_info_htn.ip_proto = rule->key_spec.proto; + + tcam_key->key_mask_htn.sport = rule->key_mask.src_port; + tcam_key->key_info_htn.sport = rule->key_spec.src_port; + + tcam_key->key_mask_htn.dport = rule->key_mask.dst_port; + tcam_key->key_info_htn.dport = rule->key_spec.dst_port; + if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL) + hinic3_fdir_tcam_notunnel_htn_init(rule, tcam_key); + else + hinic3_fdir_tcam_tunnel_htn_init(rule, tcam_key); + + fdir_tcam_rule->data.qid = rule->rq_index; + + tcam_key_calculate(tcam_key, fdir_tcam_rule); +} + /** * Find filter in given ethertype filter list. * @@ -513,19 +727,30 @@ hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_li * Point to the tcam filter list. * @param[in] key * The tcam key to find. + * @param[in] action_type + * The type of action. + * @param[in] tcam_index + * The index of tcam. * @return * If a matching filter is found, the filter is returned, otherwise NULL. */ static inline struct hinic3_tcam_filter * hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list, - struct hinic3_tcam_key *key) + struct hinic3_tcam_key *key, + uint8_t action_type, uint16_t tcam_index) { struct hinic3_tcam_filter *it; - TAILQ_FOREACH(it, filter_list, entries) { - if (memcmp(key, &it->tcam_key, - sizeof(struct hinic3_tcam_key)) == 0) { - return it; + if (action_type == HINIC3_ACTION_ADD) { + TAILQ_FOREACH(it, filter_list, entries) { + if (memcmp(key, &it->tcam_key, sizeof(struct hinic3_tcam_key)) == 0) + return it; + } + } else { + TAILQ_FOREACH(it, filter_list, entries) { + if (tcam_index == + (it->index + HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(it->dynamic_block_id))) + return it; } } @@ -588,25 +813,18 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, * * @param[in] dev * Pointer to ethernet device structure. - * @param[in] fdir_tcam_rule - * Indicate the filtering rule to be searched for. * @param[in] tcam_info * Ternary Content-Addressable Memory (TCAM) information. - * @param[in] tcam_filter - * Point to the TCAM filter. * @param[out] tcam_index * Indicate the TCAM index to be searched for. * @result * Pointer to the TCAM dynamic block. If the search fails, NULL is returned. 
*/ static struct hinic3_tcam_dynamic_block * -hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, - struct hinic3_tcam_cfg_rule *fdir_tcam_rule, +hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev, struct hinic3_tcam_info *tcam_info, - struct hinic3_tcam_filter *tcam_filter, uint16_t *tcam_index) { - struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); uint16_t block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt; struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; struct hinic3_tcam_dynamic_block *tmp = NULL; @@ -616,6 +834,8 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, uint16_t index; int err; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + rule_nums += nic_dev->ethertype_rule_nums; /* * Check whether the number of filtering rules reaches the maximum * capacity of dynamic TCAM blocks. @@ -662,8 +882,7 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, if (tmp == NULL || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "Fdir filter dynamic lookup for index failed!"); + PMD_DRV_LOG(ERR, "Fdir filter dynamic lookup for index failed!"); goto look_up_failed; } @@ -674,20 +893,13 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, /* Find the first free position. */ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "tcam block 0x%x supports filter rules is full!", + PMD_DRV_LOG(ERR, "tcam block 0x%x supports filter rules is full!", tmp->dynamic_block_id); goto look_up_failed; } - tcam_filter->dynamic_block_id = tmp->dynamic_block_id; - tcam_filter->index = index; *tcam_index = index; - fdir_tcam_rule->index = - HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) + - index; - return tmp; look_up_failed: @@ -702,6 +914,107 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, return NULL; } +static void +hinic3_tcam_index_free(struct hinic3_nic_dev *nic_dev, uint16_t index, uint16_t block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + + TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list, entries) { + if (tmp->dynamic_block_id == block_id) + break; + } + + if (tmp == NULL || tmp->dynamic_block_id != block_id) { + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); + return; + } + + tmp->dynamic_index[index] = 0; + tmp->dynamic_index_cnt--; + if (tmp->dynamic_index_cnt == 0) { + hinic3_free_tcam_block(nic_dev->hwdev, &block_id); + hinic3_free_dynamic_block_resource(tcam_info, tmp); + } +} + +static uint16_t +hinic3_tcam_alloc_index(void *dev, uint16_t *block_id) +{ + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev; + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + uint16_t index = 0; + + tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev, tcam_info, &index); + if (tmp == NULL) { + PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); + return HINIC3_TCAM_INVALID_INDEX; + } + + tmp->dynamic_index[index] = 1; + tmp->dynamic_index_cnt++; + + *block_id = tmp->dynamic_block_id; + + return index; +} + +static int +hinic3_set_fdir_ethertype_filter(void *hwdev, uint8_t pkt_type, void *filter, uint8_t en) +{ + struct hinic3_set_fdir_ethertype_rule ethertype_cmd; + struct hinic3_ethertype_filter *ethertype_filter = (struct hinic3_ethertype_filter *)filter; + uint16_t out_size = sizeof(ethertype_cmd); + uint16_t 
block_id; + uint32_t index = 0; + int err; + + if (!hwdev) + return -EINVAL; + struct hinic3_nic_dev *nic_dev = + (struct hinic3_nic_dev *)((struct hinic3_hwdev *)hwdev)->dev_handle; + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en != 0) { + index = hinic3_tcam_alloc_index(nic_dev, &block_id); + if (index == HINIC3_TCAM_INVALID_INDEX) + return -ENOMEM; + index += HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id); + } else { + index = ethertype_filter->tcam_index[pkt_type]; + } + } + + memset(ðertype_cmd, 0, sizeof(struct hinic3_set_fdir_ethertype_rule)); + ethertype_cmd.func_id = hinic3_global_func_id(hwdev); + ethertype_cmd.pkt_type = pkt_type; + ethertype_cmd.pkt_type_en = en; + ethertype_cmd.index = index; + ethertype_cmd.qid = (uint8_t)ethertype_filter->queue; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, + HINIC3_NIC_CMD_SET_FDIR_STATUS, + ðertype_cmd, sizeof(ethertype_cmd), + ðertype_cmd, &out_size); + if (err || ethertype_cmd.head.status || !out_size) { + PMD_DRV_LOG(ERR, + "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", + err, ethertype_cmd.head.status, out_size, ethertype_cmd.func_id); + return -EIO; + } + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en == 0) { + hinic3_tcam_index_free(nic_dev, HINIC3_TCAM_GET_INDEX_IN_BLOCK(index), + HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index)); + } else { + ethertype_filter->tcam_index[pkt_type] = index; + } + } + + return 0; +} + /** * Add a TCAM filter. * @@ -722,11 +1035,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); - struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; - struct hinic3_tcam_dynamic_block *tmp = NULL; struct hinic3_tcam_filter *tcam_filter; - uint16_t tcam_block_index = 0; - uint16_t index = 0; int err; /* Alloc TCAM filter memory. */ @@ -737,39 +1046,14 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, tcam_filter->tcam_key = *tcam_key; tcam_filter->queue = (uint16_t)(fdir_tcam_rule->data.qid); - - /* Add new TCAM rules. */ - if (nic_dev->tcam_rule_nums == 0) { - err = hinic3_alloc_tcam_block(nic_dev->hwdev, &tcam_block_index); - if (err) { - PMD_DRV_LOG(ERR, - "Fdir filter tcam alloc block failed!"); - goto failed; - } - - dynamic_block_ptr = - hinic3_alloc_dynamic_block_resource(tcam_info, - tcam_block_index); - if (dynamic_block_ptr == NULL) { - PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first block memory failed!"); - goto alloc_block_failed; - } - } - - /* - * Look for an available index in the dynamic block to store the new - * TCAM filter. - */ - tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info, - tcam_filter, &index); - if (tmp == NULL) { - PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); - goto lookup_tcam_index_failed; - } + tcam_filter->index = hinic3_tcam_alloc_index(nic_dev, &tcam_filter->dynamic_block_id); + if (tcam_filter->index == HINIC3_TCAM_INVALID_INDEX) + goto failed; + fdir_tcam_rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tcam_filter->dynamic_block_id) + + tcam_filter->index; /* Add a new TCAM rule to the network device. 
*/ - err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, - TCAM_RULE_FDIR_TYPE); + err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, TCAM_RULE_FDIR_TYPE); if (err) { PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!"); goto add_tcam_rules_failed; @@ -785,10 +1069,6 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, /* Add a filter to the end of the queue. */ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries); - /* Update dynamic index. */ - tmp->dynamic_index[index] = 1; - tmp->dynamic_index_cnt++; - nic_dev->tcam_rule_nums++; PMD_DRV_LOG(INFO, @@ -796,7 +1076,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, hinic3_global_func_id(nic_dev->hwdev)); PMD_DRV_LOG(INFO, "tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d", - tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index, + tcam_filter->dynamic_block_id, tcam_filter->index, fdir_tcam_rule->index, fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums); return 0; @@ -806,14 +1086,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, TCAM_RULE_FDIR_TYPE); add_tcam_rules_failed: -lookup_tcam_index_failed: - if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL) - hinic3_free_dynamic_block_resource(tcam_info, - dynamic_block_ptr); - -alloc_block_failed: - if (nic_dev->tcam_rule_nums == 0) - hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tcam_filter->dynamic_block_id); failed: rte_free(tcam_filter); @@ -850,8 +1123,7 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, } if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) { - PMD_DRV_LOG(ERR, - "Fdir filter del dynamic lookup for block failed!"); + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); return -EINVAL; } /* Calculate TCAM index. */ @@ -873,14 +1145,9 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, dynamic_block_id, tcam_filter->index, index, tmp->dynamic_index_cnt - 1, nic_dev->tcam_rule_nums - 1); - tmp->dynamic_index[tcam_filter->index] = 0; - tmp->dynamic_index_cnt--; - nic_dev->tcam_rule_nums--; - if (tmp->dynamic_index_cnt == 0) { - hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tmp->dynamic_block_id); - hinic3_free_dynamic_block_resource(tcam_info, tmp); - } + nic_dev->tcam_rule_nums--; /* If the number of rules is 0, the TCAM filter is disabled. */ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) @@ -930,6 +1197,7 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_tcam_filter *tcam_filter; @@ -940,11 +1208,15 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule)); memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key)); - hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, - &fdir_tcam_rule); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + else + hinic3_fdir_tcam_info_htn_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + /* Search for a filter. 
*/ tcam_filter = - hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key); + hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_ADD, HINIC3_INVALID_INDEX); if (tcam_filter != NULL && add) { PMD_DRV_LOG(ERR, "Filter exists."); return -EEXIST; @@ -965,6 +1237,13 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, fdir_filter->tcam_index = (int)(fdir_tcam_rule.index); } else { + tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_NOT_ADD, + fdir_filter->tcam_index); + if (tcam_filter == NULL) { + PMD_DRV_LOG(ERR, "Filter doesn't exist."); + return -ENOENT; + } PMD_DRV_LOG(INFO, "begin to del tcam filter"); ret = hinic3_del_tcam_filter(dev, tcam_filter); if (ret) @@ -1088,7 +1367,7 @@ hinic3_free_fdir_filter(struct rte_eth_dev *dev) static int hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1097,7 +1376,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1107,7 +1386,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Request Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1117,7 +1396,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Response Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1129,19 +1408,19 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, set_arp_rep_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, !add); + ethertype_filter, !add); set_arp_req_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1150,7 +1429,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the LACP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1160,7 +1439,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the OAM Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_OAM, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d", add ? 
"Add" : "Del", ret); @@ -1172,14 +1451,14 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, set_arp_oam_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1188,7 +1467,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the LLDP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1198,7 +1477,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the CDCP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_CDCP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1210,14 +1489,14 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, set_arp_cdcp_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1245,7 +1524,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, return hinic3_flow_set_arp_filter(dev, ethertype_filter, add); case RTE_ETHER_TYPE_RARP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_RARP, ethertype_filter, add); case RTE_ETHER_TYPE_SLOW: return hinic3_flow_set_slow_filter(dev, ethertype_filter, add); @@ -1255,11 +1534,11 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, case RTE_ETHER_TYPE_CNM: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_CNM, ethertype_filter, add); case RTE_ETHER_TYPE_ECP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_ECP, ethertype_filter, add); default: PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d", @@ -1270,7 +1549,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, } static int -hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter) +hinic3_flow_ethertype_rule_nums(struct hinic3_ethertype_filter *ethertype_filter) { switch (ethertype_filter->ether_type) { case RTE_ETHER_TYPE_ARP: @@ -1309,7 +1588,7 @@ hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filte */ int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { /* Get dev private info. 
*/ diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h index 8659f588d9..277d89d4fd 100644 --- a/drivers/net/hinic3/hinic3_fdir.h +++ b/drivers/net/hinic3/hinic3_fdir.h @@ -14,6 +14,30 @@ #define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index)) +#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \ + ((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \ + ((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_INVALID_INDEX 0xFFFF + +enum hinic3_ether_type { + HINIC3_PKT_TYPE_ARP = 1, + HINIC3_PKT_TYPE_ARP_REQ, + HINIC3_PKT_TYPE_ARP_REP, + HINIC3_PKT_TYPE_RARP, + HINIC3_PKT_TYPE_LACP, + HINIC3_PKT_TYPE_LLDP, + HINIC3_PKT_TYPE_OAM, + HINIC3_PKT_TYPE_CDCP, + HINIC3_PKT_TYPE_CNM, + HINIC3_PKT_TYPE_ECP = 10, + HINIC3_PKT_TYPE_BUTT, + + HINIC3_PKT_UNKNOWN = 31, +}; + /* Indicate a traffic filtering rule. */ struct rte_flow { TAILQ_ENTRY(rte_flow) node; @@ -30,6 +54,8 @@ struct hinic3_fdir_rule_key { uint16_t src_port; uint16_t dst_port; uint8_t proto; + uint8_t vlan_flag; + uint16_t ether_type; }; struct hinic3_fdir_filter { @@ -42,17 +68,34 @@ struct hinic3_fdir_filter { uint32_t rq_index; /**< Queue assigned when matched. */ }; +struct hinic3_ethertype_filter { + int tcam_index[HINIC3_PKT_TYPE_BUTT]; + uint16_t ether_type; /**< Ether type to match */ + uint16_t queue; /**< Queue assigned to when match*/ +}; + /* This structure is used to describe a basic filter type. */ struct hinic3_filter_t { uint16_t filter_rule_nums; enum rte_filter_type filter_type; - struct rte_eth_ethertype_filter ethertype_filter; + struct hinic3_ethertype_filter ethertype_filter; struct hinic3_fdir_filter fdir_filter; }; +enum hinic3_action_type { + HINIC3_ACTION_ADD, + HINIC3_ACTION_NOT_ADD, +}; + enum hinic3_fdir_tunnel_mode { HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0, - HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_NVGRE = 2, + HINIC3_FDIR_TUNNEL_MODE_FC = 3, + HINIC3_FDIR_TUNNEL_MODE_GPE = 4, + HINIC3_FDIR_TUNNEL_MODE_GENEVE = 5, + HINIC3_FDIR_TUNNEL_MODE_NSH = 6, + HINIC3_FDIR_TUNNEL_MODE_IPIP = 7, }; enum hinic3_fdir_ip_type { @@ -61,7 +104,6 @@ enum hinic3_fdir_ip_type { HINIC3_FDIR_IP_TYPE_ANY = 2, }; -/* Describe the key structure of the TCAM. */ struct hinic3_tcam_key_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -77,11 +119,13 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t dipv4_l : 16; - uint32_t rsvd2 : 16; + + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t dport : 16; uint32_t sport : 16; @@ -89,9 +133,10 @@ struct hinic3_tcam_key_mem { uint32_t rsvd6 : 16; uint32_t outer_sipv4_h : 16; - uint32_t outer_sipv4_l : 16; + uint32_t outer_sipv4_l : 16; uint32_t outer_dipv4_h : 16; + uint32_t outer_dipv4_l : 16; uint32_t vni_h : 16; @@ -110,13 +155,14 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t sipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t dipv4_l : 16; uint32_t rsvd3; uint32_t dport : 16; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t rsvd5 : 16; uint32_t sport : 16; @@ -135,18 +181,90 @@ struct hinic3_tcam_key_mem { #endif }; -/* - * Define the IPv6-related TCAM key data structure in common - * scenarios or IPv6 tunnel scenarios. 
- */ +struct hinic3_tcam_key_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h: 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t sipv4_h : 16; + + uint32_t sipv4_l : 16; + uint32_t rsvd5 : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t rsvd8 : 16; + uint32_t dipv4_h : 16; + + uint32_t dipv4_l : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd5 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t sipv4_h : 16; + uint32_t vni_l : 16; + + uint32_t rsvd5 : 16; + uint32_t sipv4_l : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t dipv4_h : 16; + uint32_t rsvd8 : 16; + + uint32_t sport : 16; + uint32_t dipv4_l :16; + + uint32_t rsvd9 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; - /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */ uint32_t ip_proto : 8; uint32_t tunnel_type : 4; uint32_t outer_ip_type : 1; - uint32_t rsvd1 : 3; + uint32_t vlan_flag : 1; + uint32_t rsvd1 : 2; uint32_t function_id : 15; uint32_t ip_type : 1; @@ -179,7 +297,9 @@ struct hinic3_tcam_key_ipv6_mem { uint32_t dipv6_key7 : 16; uint32_t rsvd2 : 16; #else - uint32_t rsvd1 : 3; + uint32_t rsvd1 : 2; + uint32_t vlan_flag : 1; + uint32_t outer_ip_type : 1; uint32_t tunnel_type : 4; uint32_t ip_proto : 8; @@ -218,10 +338,86 @@ struct hinic3_tcam_key_ipv6_mem { #endif }; -/* - * Define the tcam key value data structure related to IPv6 in - * the VXLAN scenario. 
- */ +struct hinic3_tcam_key_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t sipv6_key0 : 16; + + uint32_t sipv6_key1 : 16; + uint32_t sipv6_key2 : 16; + + uint32_t sipv6_key3 : 16; + uint32_t sipv6_key4 : 16; + + uint32_t sipv6_key5 : 16; + uint32_t sipv6_key6 : 16; + + uint32_t sipv6_key7 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t sipv6_key0 : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t sipv6_key2 : 16; + uint32_t sipv6_key1 : 16; + + uint32_t sipv6_key4 : 16; + uint32_t sipv6_key3 : 16; + + uint32_t sipv6_key6 : 16; + uint32_t sipv6_key5 : 16; + + uint32_t dipv6_key0 : 16; + uint32_t sipv6_key7 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd2 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_vxlan_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -246,7 +442,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t sport : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3 : 16; uint32_t outer_sipv4_h : 16; @@ -281,7 +478,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t dipv6_key7 : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t sport : 16; uint32_t outer_sipv4_h : 16; @@ -298,6 +496,88 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { #endif }; +struct hinic3_tcam_key_vxlan_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t rsvd3 : 16; + + uint32_t rsvd4 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t rsvd3 : 16; + uint32_t vni_l : 16; + + uint32_t dipv6_key0 : 16; + 
uint32_t rsvd4 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd5 : 16; + uint32_t dport : 16; +#endif +}; + /* * TCAM key structure. The two unions indicate the key and mask respectively. * The TCAM key is consistent with the TCAM entry. @@ -307,18 +587,26 @@ struct hinic3_tcam_key { struct hinic3_tcam_key_mem key_info; struct hinic3_tcam_key_ipv6_mem key_info_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_info_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_info_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_info_vxlan_ipv6_htn; }; union { struct hinic3_tcam_key_mem key_mask; struct hinic3_tcam_key_ipv6_mem key_mask_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_mask_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_mask_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_mask_vxlan_ipv6_htn; }; }; /* Structure indicates the TCAM filter. */ struct hinic3_tcam_filter { - TAILQ_ENTRY(hinic3_tcam_filter) - entries; /**< Filter entry, used for linked list operations. */ + /** Filter entry, used for linked list operations. */ + TAILQ_ENTRY(hinic3_tcam_filter) entries; uint16_t dynamic_block_id; /**< Dynamic block ID. */ uint16_t index; /**< TCAM index. */ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */ @@ -362,33 +650,24 @@ struct hinic3_tcam_info { #define HINIC3_CNM_RULE_NUM 1 #define HINIC3_ECP_RULE_NUM 2 +#define HINIC3_UINT1_MAX 0x1 +#define HINIC3_UINT2_MAX 0x3 +#define HINIC3_UINT3_MAX 0x7 +#define HINIC3_UINT4_MAX 0xf +#define HINIC3_UINT5_WIDTH 0x5 +#define HINIC3_UINT5_MAX 0x1f +#define HINIC3_UINT15_MAX 0x7fff + /* Define Ethernet type. */ #define RTE_ETHER_TYPE_CNM 0x22e7 #define RTE_ETHER_TYPE_ECP 0x8940 -/* Protocol type of the data packet. */ -enum hinic3_ether_type { - HINIC3_PKT_TYPE_ARP = 1, - HINIC3_PKT_TYPE_ARP_REQ, - HINIC3_PKT_TYPE_ARP_REP, - HINIC3_PKT_TYPE_RARP, - HINIC3_PKT_TYPE_LACP, - HINIC3_PKT_TYPE_LLDP, - HINIC3_PKT_TYPE_OAM, - HINIC3_PKT_TYPE_CDCP, - HINIC3_PKT_TYPE_CNM, - HINIC3_PKT_TYPE_ECP = 10, - - HINIC3_PKT_UNKNOWN = 31, -}; - int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add); int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add); - void hinic3_free_fdir_filter(struct rte_eth_dev *dev); int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, uint32_t queue_id, uint32_t able); diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index a803861199..e1741d1156 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -277,22 +277,6 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); -/** - * Get cmdq ops software tile NIC(stn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); - -/** - * Get cmdq ops hardware tile NIC(htn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); - /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e5a5f21df3..4452103e7e 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -22,8 +22,7 @@ * Current pi. */ static inline void -hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, - uint16_t *pi) +hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, uint16_t *pi) { *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx); @@ -84,8 +83,7 @@ hinic3_rx_fill_wqe(struct hinic3_rxq *rxq) if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { /* Unit of cqe length is 16B. */ - hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, - cqe_dma, + hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, cqe_dma, HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT); /* Use fixed len. */ rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; @@ -436,12 +434,18 @@ hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev, rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_type.tcp_ipv6_ext = rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); return err; } @@ -488,8 +492,7 @@ hinic3_update_rss_config(struct rte_eth_dev *dev, goto init_rss_fail; } - err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, - prio_tc); + err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, prio_tc); if (err) { PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err); goto init_rss_fail; @@ -796,7 +799,7 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) } } - hinic3_rearm_rxq_mbuf(rxq); + (void)hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); if (err) @@ -812,7 +815,6 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return err; } - static inline uint64_t hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { @@ -1016,8 +1018,8 @@ hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hini uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); - cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); - cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index fca94dd08e..1a864d0775 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -393,7 +393,7 @@ static int hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_wqe_info *wqe_info) + struct hinic3_wqe_info *wqe_info) { uint64_t 
ol_flags = mbuf->ol_flags; struct hinic3_offload_info *offload_info = &wqe_info->offload_info; @@ -409,7 +409,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + wqe_info->queue_info.payload_offset = wqe_info->payload_offset >> 1; if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; @@ -457,7 +457,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, offload_info->out_l4_en = 1; set_tx_wqe_offload: - nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); + nic_dev->tx_ops->nic_tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -627,9 +627,8 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, return err; /* Non-tso mbuf only check sge num. */ - if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { + if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); - } /* Tso mbuf. */ wqe_info->payload_offset = @@ -647,8 +646,7 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, } static inline void -hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, - uint32_t len) +hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, uint32_t len) { buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); @@ -832,14 +830,14 @@ hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT); *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | - SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset, PLDOFF) | SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c index dd944c0cf4..634dfe7239 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -97,6 +97,17 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, return HINIC3_HTN_CMD_SVLAN_MODIFY; } +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, const uint32_t *indir_table, struct hinic3_cmd_buf *cmd_buf) @@ -119,17 +130,6 @@ static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_de return HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; } -static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, - struct hinic3_cmd_buf *cmd_buf) -{ - struct hinic3_rss_cmd_header *header = cmd_buf->buf; - - header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); - - 
rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(header, sizeof(*header)); -} - static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf) { diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index ffafe39fb5..73f4922734 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -52,4 +52,12 @@ struct hinic3_htn_vlan_ctx { uint16_t dest_func_id; }; +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); + #endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build index b79b753716..b286cdb79c 100644 --- a/drivers/net/hinic3/meson.build +++ b/drivers/net/hinic3/meson.build @@ -16,8 +16,6 @@ endif cflags += ['-DHW_CONVERT_ENDIAN'] -subdir('base') - sources = files( 'hinic3_ethdev.c', 'hinic3_nic_io.c', @@ -28,3 +26,9 @@ sources = files( ) includes += include_directories('base') +includes += include_directories('stn_adapt') +includes += include_directories('stn_adapt') + +subdir('base') +subdir('htn_adapt') +subdir('stn_adapt') \ No newline at end of file diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index 5e6594f518..00c3b8b895 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -94,7 +94,7 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } -static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev __rte_unused, const uint32_t *indir_table, struct hinic3_cmd_buf *cmd_buf) { diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index a40c4faa89..f1720c29c7 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -35,4 +35,12 @@ struct hinic3_stn_vlan_ctx { uint32_t vlan_sel; }; +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); + #endif /* _HINIC3_STN_CMDQ_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
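The fdir rework in this patch replaces the old per-filter block bookkeeping with a single global TCAM index that encodes the dynamic block ID and the offset inside that block: hinic3_tcam_alloc_index() hands out the pair, and hinic3_tcam_index_free() recovers it when a flow or ethertype rule is deleted. A minimal sketch of the mapping, using only the macros visible in the hinic3_fdir.h hunk; the block-size value and the helper function below are illustrative, not part of the patch:

#include <stdint.h>

/* Illustrative value; the real HINIC3_TCAM_DYNAMIC_BLOCK_SIZE is defined
 * elsewhere in hinic3_fdir.h and is not shown in this hunk. */
#define HINIC3_TCAM_DYNAMIC_BLOCK_SIZE         16
#define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(b) (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (b))
#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(i) ((i) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)
#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(i)      ((i) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)

/* Hypothetical helper: build the global index programmed into hardware from
 * a (block, local index) pair; the two GET macros invert it, which is what
 * hinic3_set_fdir_ethertype_filter() relies on when freeing an index. */
static inline uint16_t
tcam_global_index(uint16_t block_id, uint16_t local_idx)
{
        return (uint16_t)(HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id) + local_idx);
}

With a block size of 16, for example, block 2 / local index 5 maps to global index 37, and 37 / 16 and 37 % 16 give the pair back.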
* Re: [V2 0/7] hinic3 change for support new SPx NIC 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (6 preceding siblings ...) 2026-03-16 13:43 ` [V2 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang @ 2026-03-16 15:45 ` Stephen Hemminger 2026-03-19 2:50 ` 回复: " wangfeifei (J) 2026-03-19 13:52 ` [v6 " Feifei Wang ` (2 subsequent siblings) 10 siblings, 1 reply; 80+ messages in thread From: Stephen Hemminger @ 2026-03-16 15:45 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, chenyi221 On Mon, 16 Mar 2026 21:43:22 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > From: chenyi221 <chenyi221@huawei.com> > > Change hinic3 driver to support Huawei new SPx series NIC. > > v2: > --fix build issues > > Feifei Wang (7): > net/hinic3: add support for new SPx series NIC > net/hinic3: add enhance cmdq support for new SPx series NIC > net/hinic3: use different callback func to split new/old cmdq > operations > net/hinic3: add fun init ops to support Compact CQE > net/hinic3: add rx ops to support Compact CQE > net/hinic3: add tx ops to support Compact CQE > net/hinic3: use different callback func to support htn fdir > > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ > drivers/net/hinic3/base/hinic3_csr.h | 18 +- > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- > drivers/net/hinic3/base/hinic3_hwdev.h | 18 + > drivers/net/hinic3/base/hinic3_hwif.c | 10 +- > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- > drivers/net/hinic3/base/hinic3_mgmt.h | 2 + > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- > drivers/net/hinic3/base/meson.build | 1 + > drivers/net/hinic3/hinic3_ethdev.c | 280 ++++++-- > drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- > drivers/net/hinic3/hinic3_rx.c | 265 +++++-- > drivers/net/hinic3/hinic3_rx.h | 182 ++++- > drivers/net/hinic3/hinic3_tx.c | 458 ++++++------ > drivers/net/hinic3/hinic3_tx.h | 154 +++- > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ > drivers/net/hinic3/htn_adapt/meson.build | 7 + > drivers/net/hinic3/meson.build | 8 +- > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ > drivers/net/hinic3/stn_adapt/meson.build | 7 + > 33 files changed, 3362 insertions(+), 1443 deletions(-) > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > > -- > 2.45.1.windows.1 > Still lots of things that need to be addressed here. 
See the following AI patch review. Patch 1/7: net/hinic3: add support for new SPx series NIC HINIC3_DEV_ID_SP230 is defined as 0X0229 with an uppercase X. Every other hex constant in the file uses lowercase 0x. Should be 0x0229. HINIC3_DEV_ID_SP920 has no VF counterpart and no entry in hinic3_is_vf_dev(). Is that intentional? Patch 2/7: net/hinic3: add enhance cmdq support The unified cmdq_sync_cmd() retains rte_smp_rmb() which is a deprecated barrier. Since this function is being rewritten, please convert to rte_atomic_thread_fence(rte_memory_order_acquire). HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 without any explanation in the commit message. If existing normal-cmdq commands relied on the larger buffer this could silently truncate them. Patch 3/7: net/hinic3: use different callback func to split new/old cmdq operations The call sites in hinic3_ethdev.c use hinic3_cmdq_get_stn_ops() and hinic3_cmdq_get_htn_ops() but the actual definitions are hinic3_nic_cmdq_get_stn_ops() and hinic3_nic_cmdq_get_htn_ops(). This will not link. Patch 4 fixes the call site but each commit must compile independently. In htn_adapt/hinic3_htn_cmdq.c the static function prepare_rss_indir_table_cmd_header() is called before it is defined, with no forward declaration. This will fail with -Werror=implicit-function-declaration. Patch 4/7: net/hinic3: add fun init ops to support Compact CQE In hinic3_rx_queue_setup(), when ci_mz allocation fails the code calls hinic3_memzone_free(ci_mz) on a NULL pointer before jumping to the error label. This call is either a NULL dereference or dead code and should be removed. The error-path goto labels in hinic3_func_init() are misordered. The allocation order is mac_addrs, cmdq_ops, rx_ops, tx_ops, mc_list but the cleanup labels do not reverse this properly. For example if cmdq_ops allocation fails, mac_addrs is never freed. If tx_ops fails, neither cmdq_ops nor mac_addrs are freed. This leaks memory on every init failure path. The ternary in hinic3_pf_get_default_cos() assigns HINIC3_COS_NUM_MAX (8) when NIC_F_HTN_CMDQ is set and HINIC3_COS_NUM_MAX_HTN (4) when it is not. The naming suggests this is backwards. Please verify. Patch 5/7: net/hinic3: add rx ops to support Compact CQE hinic3_poll_integrated_cqe_rq_empty() uses __atomic_load_n() with __ATOMIC_ACQUIRE. New code must use rte_atomic_load_explicit() with rte_memory_order_acquire instead of the GCC built-in. Patch 7/7: net/hinic3: use different callback func to support htn fdir Copy-paste bug in hinic3_rss_hash_update() and hinic3_rss_conf_get(). In both functions the else branch has: rss_type.ipv6_ext = 0; rss_type.ipv6_ext = 0; The second line should be rss_type.tcp_ipv6_ext = 0. As written tcp_ipv6_ext is never cleared on non-HTN hardware which will produce incorrect RSS configuration. ^ permalink raw reply [flat|nested] 80+ messages in thread
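Two of the items above ask for the same API migration: away from the legacy rte_smp_*() barriers and the GCC __atomic built-ins, toward the rte_stdatomic wrappers. A minimal sketch of both conversions, assuming the rte_stdatomic.h API of recent DPDK releases; the function names are hypothetical stand-ins, not the actual hinic3 code:

#include <rte_atomic.h>
#include <rte_stdatomic.h>

/* cmdq_sync_cmd(): the deprecated rte_smp_rmb() becomes an explicit
 * acquire fence. */
static inline void
cmdq_completion_acquire_fence(void)
{
        rte_atomic_thread_fence(rte_memory_order_acquire);
}

/* hinic3_poll_integrated_cqe_rq_empty(): __atomic_load_n(p, __ATOMIC_ACQUIRE)
 * becomes rte_atomic_load_explicit() on an RTE_ATOMIC()-qualified object. */
static inline uint32_t
cqe_status_load(const RTE_ATOMIC(uint32_t) *status)
{
        return rte_atomic_load_explicit(status, rte_memory_order_acquire);
}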
* RE: [V2 0/7] hinic3 change for support new SPx NIC
  2026-03-16 15:45 ` [V2 0/7] hinic3 change for support new SPx NIC Stephen Hemminger
@ 2026-03-19 2:50 ` wangfeifei (J)
  0 siblings, 0 replies; 80+ messages in thread
From: wangfeifei (J) @ 2026-03-19 2:50 UTC (permalink / raw)
  To: Stephen Hemminger, Feifei Wang
  Cc: dev@dpdk.org, chenyi (CY), zengweiliang zengweiliang

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: 16 March 2026 23:46
> To: Feifei Wang <wff_light@vip.163.com>
> Cc: dev@dpdk.org; chenyi (CY) <chenyi221@huawei.com>
> Subject: Re: [V2 0/7] hinic3 change for support new SPx NIC
>
> On Mon, 16 Mar 2026 21:43:22 +0800
> Feifei Wang <wff_light@vip.163.com> wrote:
>
> > From: chenyi221 <chenyi221@huawei.com>
> >
> > Change hinic3 driver to support Huawei new SPx series NIC.
> >
> > v2:
> > --fix build issues
> >
> > Feifei Wang (7):
> > net/hinic3: add support for new SPx series NIC
> > net/hinic3: add enhance cmdq support for new SPx series NIC
> > net/hinic3: use different callback func to split new/old cmdq
> > operations
> > net/hinic3: add fun init ops to support Compact CQE
> > net/hinic3: add rx ops to support Compact CQE
> > net/hinic3: add tx ops to support Compact CQE
> > net/hinic3: use different callback func to support htn fdir
> >
> > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++-
> > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------
> > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++-
> > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++
> > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++
> > drivers/net/hinic3/base/hinic3_csr.h | 18 +-
> > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +-
> > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +-
> > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +-
> > drivers/net/hinic3/base/hinic3_hwdev.h | 18 +
> > drivers/net/hinic3/base/hinic3_hwif.c | 10 +-
> > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +-
> > drivers/net/hinic3/base/hinic3_mgmt.h | 2 +
> > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++---
> > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++-
> > drivers/net/hinic3/base/meson.build | 1 +
> > drivers/net/hinic3/hinic3_ethdev.c | 280 ++++++--
> > drivers/net/hinic3/hinic3_ethdev.h | 120 ++--
> > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++-----
> > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++--
> > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++--------
> > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++-
> > drivers/net/hinic3/hinic3_rx.c | 265 +++++--
> > drivers/net/hinic3/hinic3_rx.h | 182 ++++-
> > drivers/net/hinic3/hinic3_tx.c | 458 ++++++------
> > drivers/net/hinic3/hinic3_tx.h | 154 +++-
> > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++
> > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++
> > drivers/net/hinic3/htn_adapt/meson.build | 7 +
> > drivers/net/hinic3/meson.build | 8 +-
> > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++
> > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++
> > drivers/net/hinic3/stn_adapt/meson.build | 7 +
> > 33 files changed, 3362 insertions(+), 1443 deletions(-) create mode
> > 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c
> > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h
> > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c
> > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h
> > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build
> > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c
> > create mode 100644
drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > > > > -- > > 2.45.1.windows.1 > > > > Still lots of things that need to be addressed here. > See the following AI patch review. > > > Patch 1/7: net/hinic3: add support for new SPx series NIC > > HINIC3_DEV_ID_SP230 is defined as 0X0229 with an uppercase X. > Every other hex constant in the file uses lowercase 0x. Should be 0x0229. [Feifei] Done > > HINIC3_DEV_ID_SP920 has no VF counterpart and no entry in hinic3_is_vf_dev(). > Is that intentional? [Feifei] Yes, SP920 now doesn't support VF function. > > Patch 2/7: net/hinic3: add enhance cmdq support > > The unified cmdq_sync_cmd() retains rte_smp_rmb() which is a deprecated barrier. > Since this function is being rewritten, please convert to > rte_atomic_thread_fence(rte_memory_order_acquire). [Feifei] Done > > HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 without any explanation in > the commit message. If existing normal-cmdq commands relied on the larger > buffer this could silently truncate them. [Feifei] Done, explanation is written in patch 2 commit message: "HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 to adapt to the two types of NICs" > > Patch 3/7: net/hinic3: use different callback func to split new/old cmdq operations > > The call sites in hinic3_ethdev.c use hinic3_cmdq_get_stn_ops() and > hinic3_cmdq_get_htn_ops() but the actual definitions are > hinic3_nic_cmdq_get_stn_ops() and hinic3_nic_cmdq_get_htn_ops(). > This will not link. Patch 4 fixes the call site but each commit must compile > independently. [Feifei] Done. > > In htn_adapt/hinic3_htn_cmdq.c the static function > prepare_rss_indir_table_cmd_header() is called before it is defined, with no > forward declaration. This will fail with -Werror=implicit-function-declaration. [Feifei]Done. > > Patch 4/7: net/hinic3: add fun init ops to support Compact CQE > > In hinic3_rx_queue_setup(), when ci_mz allocation fails the code calls > hinic3_memzone_free(ci_mz) on a NULL pointer before jumping to the error label. > This call is either a NULL dereference or dead code and should be removed. [Feifei]Done. > > The error-path goto labels in hinic3_func_init() are misordered. > The allocation order is mac_addrs, cmdq_ops, rx_ops, tx_ops, mc_list but the > cleanup labels do not reverse this properly. For example if cmdq_ops allocation > fails, mac_addrs is never freed. > If tx_ops fails, neither cmdq_ops nor mac_addrs are freed. This leaks memory on > every init failure path. [Feifei]Done. > > The ternary in hinic3_pf_get_default_cos() assigns HINIC3_COS_NUM_MAX (8) > when NIC_F_HTN_CMDQ is set and HINIC3_COS_NUM_MAX_HTN (4) when it is > not. The naming suggests this is backwards. Please verify. [Feifei]Done > > Patch 5/7: net/hinic3: add rx ops to support Compact CQE > > hinic3_poll_integrated_cqe_rq_empty() uses __atomic_load_n() with > __ATOMIC_ACQUIRE. New code must use rte_atomic_load_explicit() with > rte_memory_order_acquire instead of the GCC built-in. [Feifei] Done. > > Patch 7/7: net/hinic3: use different callback func to support htn fdir > > Copy-paste bug in hinic3_rss_hash_update() and hinic3_rss_conf_get(). > In both functions the else branch has: > > rss_type.ipv6_ext = 0; > rss_type.ipv6_ext = 0; > > The second line should be rss_type.tcp_ipv6_ext = 0. As written tcp_ipv6_ext is > never cleared on non-HTN hardware which will produce incorrect RSS > configuration. [Feifei] Done ^ permalink raw reply [flat|nested] 80+ messages in thread
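The last item ("[Feifei] Done" for patch 7/7) covers the duplicated assignment flagged in the review. One possible corrected shape of that branch (the same pattern appears in hinic3_init_rss_type() in the v1 hunk earlier in this thread), based only on the review comment and the field names visible there; the actual v3+ fix may be structured differently:

        if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) {
                rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
                rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
        } else {
                /* Non-HTN hardware does not support the IPv6 extension hash
                 * types, so both bits must be cleared. */
                rss_type.ipv6_ext = 0;
                rss_type.tcp_ipv6_ext = 0;
        }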
* [v6 0/7] hinic3 change for support new SPx NIC 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (7 preceding siblings ...) 2026-03-16 15:45 ` [V2 0/7] hinic3 change for support new SPx NIC Stephen Hemminger @ 2026-03-19 13:52 ` Feifei Wang 2026-03-19 13:52 ` [V6 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang ` (8 more replies) 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang 10 siblings, 9 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-19 13:52 UTC (permalink / raw) To: dev; +Cc: chenyi221 From: chenyi221 <chenyi221@huawei.com> Change hinic3 driver to support Huawei new SPx series NIC. v2: --fix build issues v3: --fix community review comments and err reports v4: --fix rss type assignment error v5: --fix community ubuntu-22.04-clang err v6: --fix atomic compilation error Feifei Wang (7): net/hinic3: add support for new SPx series NIC net/hinic3: add enhance cmdq support for new SPx series NIC net/hinic3: use different callback func to split new/old cmdq operations net/hinic3: add fun init ops to support Compact CQE net/hinic3: add rx ops to support Compact CQE net/hinic3: add tx ops to support Compact CQE net/hinic3: use different callback func to support htn fdir drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ drivers/net/hinic3/base/hinic3_csr.h | 18 +- drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_hwif.c | 10 +- drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- drivers/net/hinic3/base/meson.build | 1 + drivers/net/hinic3/hinic3_ethdev.c | 279 ++++++-- drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- drivers/net/hinic3/hinic3_rx.c | 267 +++++-- drivers/net/hinic3/hinic3_rx.h | 182 ++++- drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ drivers/net/hinic3/hinic3_tx.h | 154 +++- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ drivers/net/hinic3/htn_adapt/meson.build | 7 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 33 files changed, 3362 insertions(+), 1442 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build -- 2.45.1.windows.1 ^ permalink raw reply 
[flat|nested] 80+ messages in thread
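Patch 3 of the series ("use different callback func to split new/old cmdq operations") introduces the dispatch pattern the later patches build on: the command-queue message builders sit behind an ops table, and either the stn (software tile NIC, older SPx parts) or the htn (hardware tile NIC, new SPx parts) implementation is selected once at initialization. A rough sketch of the idea; only the two getter names and the prepare_cmd_buf_set_rss_indir_table() signature come from the patches, while the struct layout and the selection condition are assumptions:

struct hinic3_nic_cmdq_ops {
        uint8_t (*prepare_cmd_buf_set_rss_indir_table)(struct hinic3_nic_dev *nic_dev,
                                                       const uint32_t *indir_table,
                                                       struct hinic3_cmd_buf *cmd_buf);
        /* ...one builder per cmdq message that differs between NIC types... */
};

/* Chosen once during device init; NIC_F_HTN_CMDQ is the feature bit the
 * later patches test, but the exact selection point is an assumption here. */
nic_dev->cmdq_ops = (nic_dev->feature_cap & NIC_F_HTN_CMDQ) ?
                    hinic3_nic_cmdq_get_htn_ops() : hinic3_nic_cmdq_get_stn_ops();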
* [V6 1/7] net/hinic3: add support for new SPx series NIC 2026-03-19 13:52 ` [v6 " Feifei Wang @ 2026-03-19 13:52 ` Feifei Wang 2026-03-19 13:52 ` [V6 2/7] net/hinic3: add enhance cmdq " Feifei Wang ` (7 subsequent siblings) 8 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-19 13:52 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add new device id to support Huawei new SPx series Network Adapters. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_csr.h | 18 +++++++++--------- drivers/net/hinic3/base/hinic3_hwif.c | 10 +++++++--- drivers/net/hinic3/hinic3_ethdev.c | 14 +++++++------- 3 files changed, 23 insertions(+), 19 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h index 94b10601c4..eceb34e9fd 100644 --- a/drivers/net/hinic3/base/hinic3_csr.h +++ b/drivers/net/hinic3/base/hinic3_csr.h @@ -5,15 +5,15 @@ #ifndef _HINIC3_CSR_H_ #define _HINIC3_CSR_H_ -#ifdef CONFIG_SP_VID_DID -#define PCI_VENDOR_ID_SPNIC 0x1F3F -#define HINIC3_DEV_ID_STANDARD 0x9020 -#define HINIC3_DEV_ID_VF 0x9001 -#else -#define PCI_VENDOR_ID_HUAWEI 0x19e5 -#define HINIC3_DEV_ID_STANDARD 0x0222 -#define HINIC3_DEV_ID_VF 0x375F -#endif +#define PCI_VENDOR_ID_HUAWEI 0x19e5 + +#define HINIC3_DEV_ID_SP620 0x0222 +#define HINIC3_DEV_ID_VF_SP620 0x375F + +#define HINIC3_DEV_ID_SP230 0x0229 +#define HINIC3_DEV_ID_VF_SP230 0x3750 + +#define HINIC3_DEV_ID_SP920 0x0224 /* * Bit30/bit31 for bar index flag. diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c index 080254bf44..c82b223fa0 100644 --- a/drivers/net/hinic3/base/hinic3_hwif.c +++ b/drivers/net/hinic3/base/hinic3_hwif.c @@ -138,7 +138,11 @@ #define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MASK)) -#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF) +static inline bool hinic3_is_vf_dev(const struct rte_pci_device *pdev) +{ + return pdev->id.device_id == HINIC3_DEV_ID_VF_SP620 || + pdev->id.device_id == HINIC3_DEV_ID_VF_SP230; +} uint32_t hinic3_hwif_read_reg(struct hinic3_hwif *hwif, uint32_t reg) @@ -552,7 +556,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) void *db_base = NULL; int cfg_bar; - cfg_bar = HINIC3_IS_VF_DEV(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR + cfg_bar = hinic3_is_vf_dev(pci_dev) ? 
HINIC3_VF_PCI_CFG_REG_BAR : HINIC3_PF_PCI_CFG_REG_BAR; cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr; @@ -561,7 +565,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) "mem_resource addr is null, cfg_regs_base is NULL"); return -EFAULT; } - if (!HINIC3_IS_VF_DEV(pci_dev)) { + if (!hinic3_is_vf_dev(pci_dev)) { mgmt_reg_base = pci_dev->mem_resource[HINIC3_PCI_MGMT_REG_BAR].addr; if (mgmt_reg_base == NULL) { diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 0f72728a95..da2d6722d2 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -3521,13 +3521,13 @@ hinic3_dev_uninit(struct rte_eth_dev *dev) } static const struct rte_pci_id pci_id_hinic3_map[] = { -#ifdef CONFIG_SP_VID_DID - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)}, -#else - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)}, -#endif + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP620)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP620)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP230)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP920)}, {.vendor_id = 0}, }; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
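A side note on the hinic3_is_vf_dev() change in this patch: once the VF check covers more than one device id, a small table lookup scales a little more cleanly than chained comparisons. The sketch below is only an illustration of that alternative, not the driver's code; the device id values mirror hinic3_csr.h above, while the helper name and the standalone main() are made up for the example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HINIC3_DEV_ID_VF_SP620 0x375F
#define HINIC3_DEV_ID_VF_SP230 0x3750

/* Table-driven variant of the VF device-id check (illustrative only). */
static bool hinic3_is_vf_dev_id(uint16_t device_id)
{
	static const uint16_t vf_ids[] = {
		HINIC3_DEV_ID_VF_SP620,
		HINIC3_DEV_ID_VF_SP230,
	};
	unsigned int i;

	for (i = 0; i < sizeof(vf_ids) / sizeof(vf_ids[0]); i++)
		if (device_id == vf_ids[i])
			return true;
	return false;
}

int main(void)
{
	/* 0x0224 is the SP920 PF id, so it must not report as a VF. */
	printf("0x0224 -> %d, 0x375F -> %d\n",
	       hinic3_is_vf_dev_id(0x0224), hinic3_is_vf_dev_id(0x375F));
	return 0;
}

Either form behaves the same for the two VF ids added here; the table only starts to pay off if further VF device ids land later.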
* [V6 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC 2026-03-19 13:52 ` [v6 " Feifei Wang 2026-03-19 13:52 ` [V6 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang @ 2026-03-19 13:52 ` Feifei Wang 2026-03-19 13:52 ` [V6 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang ` (6 subsequent siblings) 8 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-19 13:52 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add enhance command queue for new SPx series NIC New SPx series NIC uses enhance command queue to send messages to hardware NIC, which is different from previous SPx NIC's common command queue.HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 to adapt to the two types of NICs. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_cmd.h | 80 ++-- drivers/net/hinic3/base/hinic3_cmdq.c | 370 +++++++----------- drivers/net/hinic3/base/hinic3_cmdq.h | 112 +++++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 ++++++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++++ drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 77 ++-- drivers/net/hinic3/base/meson.build | 1 + 13 files changed, 627 insertions(+), 333 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h index 6042ca51bd..f2d5d47522 100644 --- a/drivers/net/hinic3/base/hinic3_cmd.h +++ b/drivers/net/hinic3/base/hinic3_cmd.h @@ -23,14 +23,21 @@ #define HINIC3_RSS_TYPE_GET(val, member) \ (((uint32_t)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1) +#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) + /* NIC CMDQ MODE. */ enum hinic3_ucode_cmd { - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT = 1, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE = 4, - HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE = 5, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE = 6, - HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, + HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, + HINIC3_UCODE_CMD_ARM_SQ, + HINIC3_UCODE_CMD_ARM_RQ, + HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_SET_IQ_ENABLE, + HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_VLAN_CTX, }; /* Commands between NIC to MPU. 
*/ @@ -51,6 +58,12 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_RX_LRO = 13, HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE = 16, + + HINIC3_NIC_CMD_CACHE_OUT_QP_RES = 17, + HINIC3_NIC_CMD_SET_RQ_CI_CTX = 18, + HINIC3_NIC_CMD_SET_RQ_ENABLE = 19, + /* MAC & VLAN CFG */ HINIC3_NIC_CMD_GET_MAC = 20, HINIC3_NIC_CMD_SET_MAC = 21, @@ -59,6 +72,10 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + + HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN = 34, + HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN = 35, + /* RSS CFG */ HINIC3_NIC_CMD_RSS_CFG = 60, HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, @@ -89,6 +106,7 @@ enum hinic3_mgmt_cmd { HINIC3_MGMT_CMD_CFG_PAGESIZE = 22, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG = 23, HINIC3_MGMT_CMD_SET_DMA_ATTR = 25, + HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT = 26, HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40, HINIC3_MGMT_CMD_GET_FW_VERSION = 60, HINIC3_MGMT_CMD_GET_BOARD_INFO = 61, @@ -97,39 +115,39 @@ enum hinic3_mgmt_cmd { }; enum mag_cmd { - SERDES_CMD_PROCESS = 0, + SERDES_CMD_PROCESS = 0, - MAG_CMD_SET_PORT_CFG = 1, - MAG_CMD_SET_PORT_ADAPT = 2, - MAG_CMD_CFG_LOOPBACK_MODE = 3, + MAG_CMD_SET_PORT_CFG = 1, + MAG_CMD_SET_PORT_ADAPT = 2, + MAG_CMD_CFG_LOOPBACK_MODE = 3, - MAG_CMD_GET_PORT_ENABLE = 5, - MAG_CMD_SET_PORT_ENABLE = 6, - MAG_CMD_GET_LINK_STATUS = 7, - MAG_CMD_SET_LINK_FOLLOW = 8, - MAG_CMD_SET_PMA_ENABLE = 9, - MAG_CMD_CFG_FEC_MODE = 10, + MAG_CMD_GET_PORT_ENABLE = 5, + MAG_CMD_SET_PORT_ENABLE = 6, + MAG_CMD_GET_LINK_STATUS = 7, + MAG_CMD_SET_LINK_FOLLOW = 8, + MAG_CMD_SET_PMA_ENABLE = 9, + MAG_CMD_CFG_FEC_MODE = 10, /* PHY */ - MAG_CMD_GET_XSFP_INFO = 60, - MAG_CMD_SET_XSFP_ENABLE = 61, - MAG_CMD_GET_XSFP_PRESENT = 62, + MAG_CMD_GET_XSFP_INFO = 60, + MAG_CMD_SET_XSFP_ENABLE = 61, + MAG_CMD_GET_XSFP_PRESENT = 62, /* sfp/qsfp single byte read/write, for equipment test. 
*/ - MAG_CMD_SET_XSFP_RW = 63, - MAG_CMD_CFG_XSFP_TEMPERATURE = 64, + MAG_CMD_SET_XSFP_RW = 63, + MAG_CMD_CFG_XSFP_TEMPERATURE = 64, - MAG_CMD_WIRE_EVENT = 100, - MAG_CMD_LINK_ERR_EVENT = 101, + MAG_CMD_WIRE_EVENT = 100, + MAG_CMD_LINK_ERR_EVENT = 101, - MAG_CMD_EVENT_PORT_INFO = 150, - MAG_CMD_GET_PORT_STAT = 151, - MAG_CMD_CLR_PORT_STAT = 152, - MAG_CMD_GET_PORT_INFO = 153, - MAG_CMD_GET_PCS_ERR_CNT = 154, - MAG_CMD_GET_MAG_CNT = 155, - MAG_CMD_DUMP_ANTRAIN_INFO = 156, + MAG_CMD_EVENT_PORT_INFO = 150, + MAG_CMD_GET_PORT_STAT = 151, + MAG_CMD_CLR_PORT_STAT = 152, + MAG_CMD_GET_PORT_INFO = 153, + MAG_CMD_GET_PCS_ERR_CNT = 154, + MAG_CMD_GET_MAG_CNT = 155, + MAG_CMD_DUMP_ANTRAIN_INFO = 156, - MAG_CMD_MAX = 0xFF + MAG_CMD_MAX = 0xFF }; #endif /* _HINIC3_CMD_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c index e2b30ff94e..9c27c6f54c 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.c +++ b/drivers/net/hinic3/base/hinic3_cmdq.c @@ -5,6 +5,7 @@ #include "hinic3_compat.h" #include "hinic3_cmd.h" #include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" #include "hinic3_hwdev.h" #include "hinic3_hwif.h" #include "hinic3_mgmt.h" @@ -125,17 +126,17 @@ #define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi)) -#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) - #define FIRST_DATA_TO_WRITE_LAST sizeof(uint64_t) -#define WQE_LCMD_SIZE 64 -#define WQE_SCMD_SIZE 64 +#define WQE_LCMDQ_SIZE 64 +#define WQE_SCMDQ_SIZE 64 +#define WQE_ENHANCE_CMDQ_SIZE 32 #define COMPLETE_LEN 3 #define CMDQ_WQEBB_SIZE 64 #define CMDQ_WQEBB_SHIFT 6 +#define CMDQ_ENHANCE_WQEBB_SHIFT 4 #define CMDQ_WQE_SIZE 64 @@ -203,43 +204,6 @@ hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf) rte_free(cmd_buf); } -static uint32_t -cmdq_wqe_size(enum cmdq_wqe_type wqe_type) -{ - uint32_t wqe_size = 0; - - switch (wqe_type) { - case WQE_LCMD_TYPE: - wqe_size = WQE_LCMD_SIZE; - break; - case WQE_SCMD_TYPE: - wqe_size = WQE_SCMD_SIZE; - break; - } - - return wqe_size; -} - -static uint32_t -cmdq_get_wqe_size(enum bufdesc_len len) -{ - uint32_t wqe_size = 0; - - switch (len) { - case BUFDESC_LCMD_LEN: - wqe_size = WQE_LCMD_SIZE; - break; - case BUFDESC_SCMD_LEN: - wqe_size = WQE_SCMD_SIZE; - break; - default: - PMD_DRV_LOG(ERR, "Invalid bufdesc_len"); - break; - } - - return wqe_size; -} - static void cmdq_set_completion(struct hinic3_cmdq_completion *complete, struct hinic3_cmd_buf *buf_out) @@ -274,11 +238,11 @@ cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type, } static void -cmdq_wqe_fill(void *dst, void *src) +cmdq_wqe_fill(void *dst, void *src, int wqe_size) { memcpy((void *)((uint8_t *)dst + FIRST_DATA_TO_WRITE_LAST), (void *)((uint8_t *)src + FIRST_DATA_TO_WRITE_LAST), - CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST); + wqe_size - FIRST_DATA_TO_WRITE_LAST); /* The first 8 bytes should be written last. */ rte_atomic_thread_fence(rte_memory_order_release); @@ -369,182 +333,94 @@ cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in); } -/** - * Prepare necessary context for command queue, send a synchronous command with - * a direct response to hardware. It waits for completion of command by polling - * command queue for a response. - * - * @param[in] cmdq - * The command queue object that represents the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. 
- * @param[in] buf_in - * The input buffer containing the command parameters. - * @param[out] out_param - * A pointer to the location where the response data will be stored, if - * available. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - uint64_t *out_param, uint32_t timeout) +static void +cmdq_sync_wqe_prepare(struct hinic3_cmdq *cmdq, uint8_t mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + struct hinic3_cmdq_wqe *curr_wqe, uint16_t curr_pi, + enum hinic3_cmdq_cmd_type nic_cmd_type) { struct hinic3_cmdq_wqe wqe; - struct hinic3_wq *wq = cmdq->wq; - struct hinic3_cmdq_wqe *curr_wqe = NULL; - struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + int wrapped, wqe_size; + enum cmdq_cmd_type cmd_type; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + wqe_size = cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ ? + WQE_LCMDQ_SIZE : WQE_ENHANCE_CMDQ_SIZE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ - rte_spinlock_lock(&cmdq->cmdq_lock); + memset(&wqe, 0, (uint32_t)wqe_size); - curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } - - memset(&wqe, 0, sizeof(wqe)); wrapped = cmdq->wrapped; - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped, - mod, cmd, curr_prod_idx); - + cmd_type = (nic_cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP) ? + SYNC_CMD_DIRECT_RESP : SYNC_CMD_SGE_RESP; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_set_lcmd_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd, curr_pi); + else + hinic3_enhance_cmdq_set_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd); + /* The data written to HW should be in Big Endian Format */ hinic3_cpu_to_hw(&wqe, wqe_size); - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP; - - cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ - - if (out_param) { - wqe_lcmd = &curr_wqe->wqe_lcmd; - *out_param = rte_cpu_to_be_64(wqe_lcmd->completion.direct_resp); - } - - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; - -cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); - - return err; + cmdq_wqe_fill(curr_wqe, &wqe, wqe_size); } -/** - * Send a synchronous command with detailed response and wait for the - * completion. - * - * @param[in] cmdq - * The command queue object representing the queue to send the command to. 
- * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the parameters for the command. - * @param[out] buf_out - * The output buffer where the detailed response from the hardware will be - * stored. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - struct hinic3_cmd_buf *buf_out, uint32_t timeout) +#define NUM_WQEBBS_FOR_CMDQ_WQE 1 +#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2 + +static int cmdq_sync_cmd(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + uint64_t *out_param, uint32_t timeout, + enum hinic3_cmdq_cmd_type nic_cmd_type) { - struct hinic3_cmdq_wqe wqe; struct hinic3_wq *wq = cmdq->wq; struct hinic3_cmdq_wqe *curr_wqe = NULL; uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + uint32_t time; + uint64_t *direct_resp = NULL; + int err; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + num_wqebbs = (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) ? + NUM_WQEBBS_FOR_CMDQ_WQE : NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ + /* Keep wrapped and doorbell index correct */ rte_spinlock_lock(&cmdq->cmdq_lock); curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { + if (!curr_wqe) { err = -EBUSY; goto cmdq_unlock; } - memset(&wqe, 0, sizeof(wqe)); - wrapped = cmdq->wrapped; + cmdq_sync_wqe_prepare(cmdq, mod, cmd, buf_in, buf_out, + curr_wqe, curr_prod_idx, nic_cmd_type); + + cmdq->cmd_infos[curr_prod_idx].cmd_type = nic_cmd_type; next_prod_idx = curr_prod_idx + num_wqebbs; if (next_prod_idx >= wq->q_depth) { cmdq->wrapped = !cmdq->wrapped; next_prod_idx -= wq->q_depth; } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped, - mod, cmd, curr_prod_idx); - - hinic3_cpu_to_hw(&wqe, wqe_size); - - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP; - - cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); + cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); + time = msecs_to_cycles(timeout ? 
timeout : CMDQ_CMD_TIMEOUT); + err = hinic3_cmdq_poll_msg(cmdq, time); if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); + PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", curr_prod_idx); err = -ETIMEDOUT; goto cmdq_unlock; } - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ + rte_atomic_thread_fence(rte_memory_order_acquire); /* Read error code after completion */ + + if (out_param) { + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + direct_resp = + (uint64_t *)(&curr_wqe->wqe_lcmd.completion.direct_resp); + else + direct_resp = (uint64_t *) + (&curr_wqe->enhanced_cmdq_wqe.completion.sge_resp_lo_addr); + + *out_param = rte_cpu_to_be_64(*direct_resp); + } if (cmdq->errcode[curr_prod_idx]) err = cmdq->errcode[curr_prod_idx]; @@ -588,7 +464,8 @@ wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs) int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, - struct hinic3_cmd_buf *buf_in, uint64_t *out_param, uint32_t timeout) + struct hinic3_cmd_buf *buf_in, + uint64_t *out_param, uint32_t timeout) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; int err; @@ -605,8 +482,8 @@ hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, out_param, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, + NULL, out_param, timeout, HINIC3_CMD_TYPE_DIRECT_RESP); } int @@ -628,8 +505,8 @@ hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, buf_out, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, buf_out, + NULL, timeout, HINIC3_CMD_TYPE_SGE_RESP); } static void @@ -643,21 +520,23 @@ clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) { struct hinic3_ctrl *ctrl = NULL; uint32_t header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info); - int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN); - uint32_t wqe_size = cmdq_get_wqe_size(buf_len); uint16_t num_wqebbs; - - if (wqe_size == WQE_LCMD_SIZE) - ctrl = &wqe->wqe_lcmd.ctrl; - else - ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; - - /* Clear HW busy bit. */ - ctrl->ctrl_info = 0; + enum data_format df; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT); + if (df == DATA_SGE) + ctrl = &wqe->wqe_lcmd.ctrl; + else + ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; + ctrl->ctrl_info = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE; + } else { + wqe->enhanced_cmdq_wqe.completion.cs_format = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; + } rte_atomic_thread_fence(rte_memory_order_release); /**< Verify wqe is cleared. 
*/ - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq); hinic3_put_wqe(cmdq->wq, num_wqebbs); } @@ -735,25 +614,28 @@ static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; - struct hinic3_cmd_cmdq_ctxt cmdq_ctxt; - enum hinic3_cmdq_type cmdq_type; + struct hinic3_cmd_cmdq_ctxt cmdq_ctxt = {0}; + enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; uint16_t out_size = sizeof(cmdq_ctxt); + uint16_t cmd; int err; - for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { - memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt)); - cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + if (hwdev->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + cmd = HINIC3_MGMT_CMD_SET_CMDQ_CTXT; + } else { + cmdq_ctxt.enhance_ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_enhance_ctxt; + cmd = HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT; + } cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev); cmdq_ctxt.cmdq_id = cmdq_type; - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, - HINIC3_MGMT_CMD_SET_CMDQ_CTXT, + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, &cmdq_ctxt, sizeof(cmdq_ctxt), &cmdq_ctxt, &out_size); - if (err || !out_size || cmdq_ctxt.status) { - PMD_DRV_LOG(ERR, - "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", + PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", err, cmdq_ctxt.status, out_size); return -EFAULT; } @@ -794,6 +676,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) cmdqs->cmdqs_db_base = (uint8_t *)db_base; for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + cmdqs->cmdq[cmdq_type].cmdqs = cmdqs; err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, &cmdqs->saved_wqs[cmdq_type], cmdq_type); if (err) { @@ -801,8 +684,11 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) goto init_cmdq_err; } - cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], - &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + if (cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], + &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + else + hinic3_enhance_cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type]); } err = hinic3_set_cmdq_ctxts(hwdev); @@ -821,11 +707,12 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) } int -hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_init(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = NULL; size_t saved_wqs_size; char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE]; + uint32_t wqebb_shift; int err; cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0); @@ -835,6 +722,14 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) hwdev->cmdqs = cmdqs; cmdqs->hwdev = hwdev; + if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev)) + cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ; + else + cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ; + + wqebb_shift = (cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) ? 
+ CMDQ_ENHANCE_WQEBB_SHIFT : CMDQ_WQEBB_SHIFT; + saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq); cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0); if (!cmdqs->saved_wqs) { @@ -844,8 +739,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE); - snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", - hwdev->port_id); + snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", hwdev->port_id); cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name, HINIC3_CMDQ_DEPTH * HINIC3_MAX_CMDQ_TYPES, 0, 0, @@ -857,8 +751,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES, - HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT, - HINIC3_CMDQ_DEPTH); + HINIC3_CMDQ_WQ_BUF_SIZE, wqebb_shift, HINIC3_CMDQ_DEPTH); if (err) { PMD_DRV_LOG(ERR, "Allocate cmdq failed"); goto cmdq_alloc_err; @@ -884,7 +777,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } void -hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdqs_free(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; @@ -900,14 +793,36 @@ hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) rte_free(cmdqs); } +static int +hinic3_check_cmdq_done(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) +{ + struct hinic3_ctrl *ctrl = NULL; + uint32_t ctrl_info; + + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + /* Only arm bit using scmd wqe, the wqe is lcmd. */ + ctrl = &wqe->wqe_lcmd.ctrl; + ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); + + if (!WQE_COMPLETED(ctrl_info)) + return -EBUSY; + } else { + ctrl_info = wqe->enhanced_cmdq_wqe.completion.cs_format; + ctrl_info = hinic3_hw_cpu32(ctrl_info); + + if (!ENHANCE_CMDQ_WQE_CS_GET(ctrl_info, HW_BUSY)) + return -EBUSY; + } + return 0; +} + static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) { struct hinic3_cmdq_wqe *wqe = NULL; struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - struct hinic3_ctrl *ctrl = NULL; struct hinic3_cmdq_cmd_info *cmd_info = NULL; - uint32_t status_info, ctrl_info; + uint32_t status_info; uint16_t ci; int errcode; uint64_t end; @@ -928,13 +843,10 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) return -EINVAL; } - /* Only arm bit is using scmd wqe, the wqe is lcmd. */ - wqe_lcmd = &wqe->wqe_lcmd; - ctrl = &wqe_lcmd->ctrl; + /* Only arm bit using scmd wqe, the wqe is lcmd. 
*/ end = cycles + msecs_to_cycles(timeout); do { - ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); - if (WQE_COMPLETED(ctrl_info)) { + if (hinic3_check_cmdq_done(cmdq, wqe) == 0) { done = 1; break; } @@ -943,8 +855,14 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) } while (time_before(cycles, end)); if (done) { - status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); - errcode = WQE_ERRCODE_GET(status_info, VAL); + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + wqe_lcmd = &wqe->wqe_lcmd; + status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); + errcode = WQE_ERRCODE_GET(status_info, VAL); + } else { + status_info = hinic3_hw_cpu32(wqe->enhanced_cmdq_wqe.completion.cs_format); + errcode = ENHANCE_CMDQ_WQE_CS_GET(status_info, ERR_CODE); + } cmdq_update_errcode(cmdq, ci, errcode); clear_wqe_complete_bit(cmdq, wqe); err = 0; diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h index deac909488..b31b61029e 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.h +++ b/drivers/net/hinic3/base/hinic3_cmdq.h @@ -13,25 +13,55 @@ /* Pmd driver uses 64, kernel l2nic uses 4096. */ #define HINIC3_CMDQ_DEPTH 64 -#define HINIC3_CMDQ_BUF_SIZE 2048U +#define HINIC3_CMDQ_BUF_SIZE 1024U #define HINIC3_CEQ_ID_CMDQ 0 -enum cmdq_scmd_type { CMDQ_SET_ARM_CMD = 2 }; +#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -enum cmdq_wqe_type { WQE_LCMD_TYPE = 0, WQE_SCMD_TYPE = 1 }; +enum hinic3_cmdq_mode { + HINIC3_NORMAL_CMDQ, + HINIC3_ENHANCE_CMDQ +}; + +enum cmdq_scmd_type { + CMDQ_SET_ARM_CMD = 2 +}; + +enum cmdq_wqe_type { + WQE_LCMD_TYPE, + WQE_SCMD_TYPE +}; -enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 }; +enum ctrl_sect_len { + CTRL_SECT_LEN = 1, + CTRL_DIRECT_SECT_LEN = 2 +}; -enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 }; +enum bufdesc_len { + BUFDESC_LCMD_LEN = 2, + BUFDESC_SCMD_LEN = 3 +}; -enum data_format { DATA_SGE = 0}; +enum data_format { + DATA_SGE +}; -enum completion_format { COMPLETE_DIRECT = 0, COMPLETE_SGE = 1 }; +enum completion_format { + COMPLETE_DIRECT, + COMPLETE_SGE +}; -enum completion_request { CEQ_SET = 1 }; +enum completion_request { + CEQ_SET = 1 +}; -enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD }; +enum cmdq_cmd_type { + SYNC_CMD_DIRECT_RESP, + SYNC_CMD_SGE_RESP, + ASYNC_CMD +}; enum hinic3_cmdq_type { HINIC3_CMDQ_SYNC, @@ -44,17 +74,63 @@ enum hinic3_db_src_type { HINIC3_DB_SRC_L2NIC_SQ_TYPE }; -enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE }; +enum hinic3_cmdq_db_type { + HINIC3_DB_SQ_RQ_TYPE, + HINIC3_DB_CMDQ_TYPE +}; /* Cmdq ack type. */ enum hinic3_ack_type { HINIC3_ACK_TYPE_CMDQ = 0, HINIC3_ACK_TYPE_SHARE_CQN = 1, HINIC3_ACK_TYPE_APP_CQN = 2, - HINIC3_MOD_ACK_MAX = 15 }; +struct cmdq_enhance_completion { + uint32_t cs_format; + uint32_t sge_resp_hi_addr; + uint32_t sge_resp_lo_addr; + uint32_t sge_resp_len; /* bit 14~31 rsvd, soft can't use. 
*/ +}; + +struct cmdq_enhance_response { + uint32_t cs_format; + uint32_t resvd; + uint64_t direct_data; +}; + +struct sge_send_info { + uint32_t sge_hi_addr; + uint32_t sge_li_addr; + uint32_t seg_len; + uint32_t rsvd; +}; + +struct ctrl_section { + uint32_t header; + uint32_t rsv; + uint32_t sge_send_hi_addr; + uint32_t sge_send_lo_addr; +}; + +struct enhanced_cmdq_wqe { + struct ctrl_section ctrl_sec; /* 16B */ + struct cmdq_enhance_completion completion; /* 16B */ +}; + +/* Enhance cmdq context of hardware */ +struct enhance_cmdq_ctxt_info { + uint64_t eq_cfg; + uint64_t dfx_pi_ci; + + uint64_t pft_thd; + uint64_t pft_ci; + + uint64_t rsv; + uint64_t ci_cla_addr; +}; + /* Cmdq wqe ctrls. */ struct hinic3_cmdq_header { uint32_t header_info; @@ -126,6 +202,7 @@ struct hinic3_cmdq_wqe { union { struct hinic3_cmdq_inline_wqe inline_wqe; struct hinic3_cmdq_wqe_lcmd wqe_lcmd; + struct enhanced_cmdq_wqe enhanced_cmdq_wqe; }; }; @@ -142,8 +219,10 @@ struct hinic3_cmd_cmdq_ctxt { uint16_t func_idx; uint8_t cmdq_id; uint8_t rsvd1[5]; - - struct hinic3_cmdq_ctxt_info ctxt_info; + union { + struct hinic3_cmdq_ctxt_info ctxt_info; + struct enhance_cmdq_ctxt_info enhance_ctxt_info; + }; }; enum hinic3_cmdq_status { @@ -173,8 +252,10 @@ struct hinic3_cmdq { rte_spinlock_t cmdq_lock; struct hinic3_cmdq_ctxt_info cmdq_ctxt; + struct enhance_cmdq_ctxt_info cmdq_enhance_ctxt; struct hinic3_cmdq_cmd_info *cmd_infos; + struct hinic3_cmdqs *cmdqs; }; struct hinic3_cmdqs { @@ -188,6 +269,7 @@ struct hinic3_cmdqs { struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES]; uint32_t status; + uint8_t cmdq_mode; }; struct hinic3_cmd_buf { @@ -215,8 +297,8 @@ int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod int hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, uint32_t timeout); -int hinic3_init_cmdqs(struct hinic3_hwdev *hwdev); +int hinic3_cmdq_init(struct hinic3_hwdev *hwdev); -void hinic3_free_cmdqs(struct hinic3_hwdev *hwdev); +void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev); #endif /* _HINIC3_CMDQ_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.c b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c new file mode 100644 index 0000000000..e09597c9f3 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Huawei Technologies Co., Ltd + */ + +#include <rte_mbuf.h> + +#include "hinic3_compat.h" +#include "hinic3_hwdev.h" +#include "hinic3_hwif.h" +#include "hinic3_wq.h" +#include "hinic3_cmd.h" +#include "hinic3_mgmt.h" +#include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" + +#define WQ_PREFETCH_MAX 4 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +void +hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq) +{ + struct enhance_cmdq_ctxt_info *ctxt_info = &cmdq->cmdq_enhance_ctxt; + struct hinic3_wq *wq = cmdq->wq; + uint64_t cmdq_first_block_paddr, pfn; + uint16_t start_ci = (uint16_t)wq->cons_idx; + uint32_t start_pi = (uint16_t)wq->prod_idx; + + /* The data in HW is Big Endian Format */ + cmdq_first_block_paddr = wq->queue_buf_paddr; + pfn = CMDQ_PFN(cmdq_first_block_paddr, RTE_PGSIZE_4K); + + /* First part 16B */ + ctxt_info->eq_cfg = + ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) | + ENHANCED_CMDQ_SET(0, CTXT0_EQ) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) | + ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT); + + 
ctxt_info->dfx_pi_ci = + ENHANCED_CMDQ_SET(0, CTXT1_Q_DIS) | + ENHANCED_CMDQ_SET(0, CTXT1_ERR_CODE) | + ENHANCED_CMDQ_SET(start_pi, CTXT1_PI) | + ENHANCED_CMDQ_SET(start_ci, CTXT1_CI); + + /* Second part 16B */ + ctxt_info->pft_thd = + ENHANCED_CMDQ_SET(CI_HIGN_IDX(start_ci), CTXT2_PFT_CI) | + ENHANCED_CMDQ_SET(1, CTXT2_O_BIT) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MIN, CTXT2_PFT_MIN) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MAX, CTXT2_PFT_MAX) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_THRESHOLD, CTXT2_PFT_THD); + ctxt_info->pft_ci = + ENHANCED_CMDQ_SET(pfn, CTXT3_PFT_CI_ADDR) | + ENHANCED_CMDQ_SET(start_ci, CTXT3_PFT_CI); + + /* Third part 16B */ + pfn = WQ_BLOCK_PFN(cmdq_first_block_paddr); + + ctxt_info->ci_cla_addr = ENHANCED_CMDQ_SET(pfn, CTXT4_CI_CLA_ADDR); +} + +static void +enhance_cmdq_set_completion(struct cmdq_enhance_completion *completion, + const struct hinic3_cmd_buf *buf_out) +{ + completion->sge_resp_hi_addr = upper_32_bits(buf_out->dma_addr); + completion->sge_resp_lo_addr = lower_32_bits(buf_out->dma_addr); + completion->sge_resp_len = HINIC3_CMDQ_BUF_SIZE; +} + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, + enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, + struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd) +{ + struct enhanced_cmdq_wqe *enhanced_wqe = &wqe->enhanced_cmdq_wqe; + + enhanced_wqe->ctrl_sec.header = + ENHANCE_CMDQ_WQE_HEADER_SET(buf_in->size, SEND_SGE_LEN) | + ENHANCE_CMDQ_WQE_HEADER_SET(1, BDSL) | + ENHANCE_CMDQ_WQE_HEADER_SET(DATA_SGE, DF) | + ENHANCE_CMDQ_WQE_HEADER_SET(NORMAL_WQE_TYPE, DN) | + ENHANCE_CMDQ_WQE_HEADER_SET(COMPACT_WQE_TYPE, EC) | + ENHANCE_CMDQ_WQE_HEADER_SET((uint32_t)wrapped, HW_BUSY_BIT); + + enhanced_wqe->ctrl_sec.sge_send_hi_addr = upper_32_bits(buf_in->dma_addr); + enhanced_wqe->ctrl_sec.sge_send_lo_addr = lower_32_bits(buf_in->dma_addr); + + enhanced_wqe->completion.cs_format = + ENHANCE_CMDQ_WQE_CS_SET(cmd, CMD) | + ENHANCE_CMDQ_WQE_CS_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE) | + ENHANCE_CMDQ_WQE_CS_SET(mod, MOD); + + switch (cmd_type) { + case SYNC_CMD_DIRECT_RESP: + enhanced_wqe->completion.cs_format |= ENHANCE_CMDQ_WQE_CS_SET(INLINE_DATA, CF); + break; + case SYNC_CMD_SGE_RESP: + if (buf_out) { + enhanced_wqe->completion.cs_format |= + ENHANCE_CMDQ_WQE_CS_SET(SGE_RESPONSE, CF); + enhance_cmdq_set_completion(&enhanced_wqe->completion, buf_out); + } + break; + case ASYNC_CMD: + break; + } +} diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.h b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h new file mode 100644 index 0000000000..8de0ae4d71 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_CMDQ_ENHANCE_H_ +#define _HINIC3_CMDQ_ENHANCE_H_ + +#include "hinic3_mgmt.h" + +#define NORMAL_WQE_TYPE 0 +#define COMPACT_WQE_TYPE 1 + +/* First part 16B */ +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT0_RSV1_SHIFT 52 +#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT 53 +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT 61 +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT 62 +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63 + +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT0_RSV1_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_EQ_MASK 0xFFU +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_SHIFT 0 
+#define ENHANCED_CMDQ_CTXT1_ERR_CODE_SHIFT 1 +#define ENHANCED_CMDQ_CTXT1_RSV1_SHIFT 3 +#define ENHANCED_CMDQ_CTXT1_PI_SHIFT 32 +#define ENHANCED_CMDQ_CTXT1_CI_SHIFT 48 + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_MASK 0x1U +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_MASK 0x3U +#define ENHANCED_CMDQ_CTXT1_RSV1_MASK 0x1FFFFFFFU +#define ENHANCED_CMDQ_CTXT1_PI_MASK 0xFFFFU +#define ENHANCED_CMDQ_CTXT1_CI_MASK 0xFFFFU + +/* Second part 16B */ +#define ENHANCED_CMDQ_CTXT2_PFT_CI_SHIFT 0 +#define ENHANCED_CMDQ_CTXT2_O_BIT_SHIFT 4 +#define ENHANCED_CMDQ_CTXT2_PFT_THD_SHIFT 32 +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_SHIFT 46 +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_SHIFT 57 + +#define ENHANCED_CMDQ_CTXT2_PFT_CI_MASK 0xFU +#define ENHANCED_CMDQ_CTXT2_O_BIT_MASK 0x1U +#define ENHANCED_CMDQ_CTXT2_PFT_THD_MASK 0x3FFFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_MASK 0x7FFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_MASK 0x7FU + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT3_PFT_CI_SHIFT 52 + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT3_PFT_CI_MASK 0xFFFFU + +/* Third part 16B */ +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_MASK 0x7FFFFFFFFFFFFFU + +#define ENHANCED_CMDQ_SET(val, member) \ + (((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \ + ENHANCED_CMDQ_##member##_SHIFT) + +#define CI_IDX_HIGH_SHIFH 12 +#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_SHIFT 0 +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_SHIFT 19 +#define ENHANCE_CMDQ_WQE_HEADER_DF_SHIFT 28 +#define ENHANCE_CMDQ_WQE_HEADER_DN_SHIFT 29 +#define ENHANCE_CMDQ_WQE_HEADER_EC_SHIFT 30 +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_MASK 0x3FFFFU +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_HEADER_DF_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_DN_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_EC_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_HEADER_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_HEADER_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0 +#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT 4 +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12 +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT 14 +#define ENHANCE_CMDQ_WQE_CS_MOD_SHIFT 16 +#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU +#define ENHANCE_CMDQ_WQE_CS_CMD_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK 0x1U +#define ENHANCE_CMDQ_WQE_CS_MOD_MASK 0x1FU +#define ENHANCE_CMDQ_WQE_CS_CF_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_CS_##member##_MASK) + +enum complete_format { + INLINE_DATA, + SGE_RESPONSE +}; + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd); + +void hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq 
*cmdq); + +#endif /*_HINIC3_CMDQ_ENHANCE_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c index d259b88a2d..6541bc0428 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.c +++ b/drivers/net/hinic3/base/hinic3_hw_comm.c @@ -12,7 +12,7 @@ #include "hinic3_wq.h" #include "hinic3_nic_cfg.h" -/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */ +/* Buffer sizes must be in ascending order. */ const uint32_t hinic3_hw_rx_buf_size[] = { HINIC3_RX_BUF_SIZE_32B, HINIC3_RX_BUF_SIZE_64B, @@ -239,11 +239,14 @@ hinic3_convert_rx_buf_size(uint32_t rx_buf_sz, uint32_t *match_sz) } static uint16_t -get_hw_rx_buf_size(uint32_t rx_buf_sz) +get_hw_rx_buf_size(struct hinic3_hwdev *hwdev, uint32_t rx_buf_sz) { uint16_t num_hw_types = RTE_DIM(hinic3_hw_rx_buf_size); uint16_t i; + if (HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev)) + return rx_buf_sz; + for (i = 0; i < num_hw_types; i++) { if (hinic3_hw_rx_buf_size[i] == rx_buf_sz) return i; @@ -271,8 +274,12 @@ hinic3_set_root_ctxt(struct hinic3_hwdev *hwdev, uint32_t rq_depth, root_ctxt.cmdq_depth = 0; root_ctxt.lro_en = 1; root_ctxt.rq_depth = (uint16_t)rte_log2_u32(rq_depth); - root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz); + root_ctxt.rx_buf_sz = get_hw_rx_buf_size(hwdev, rx_buf_sz); root_ctxt.sq_depth = (uint16_t)rte_log2_u32(sq_depth); + root_ctxt.cmdq_mode = hwdev->cmdqs->cmdq_mode; + + if (hwdev->cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) + root_ctxt.cmdq_depth--; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, @@ -403,7 +410,7 @@ hinic3_comm_features_nego(struct hinic3_hwdev *hwdev, uint16_t out_size = sizeof(feature_nego); int err; - if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD) + if (!hwdev || !s_feature || size > HINIC3_MAX_FEATURE_QWORD) return -EINVAL; memset(&feature_nego, 0, sizeof(feature_nego)); diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h index b86f5aad8f..42ff04ee9d 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.h +++ b/drivers/net/hinic3/base/hinic3_hw_comm.h @@ -9,17 +9,17 @@ #define HINIC3_MGMT_CMD_OP_GET 0 #define HINIC3_MGMT_CMD_OP_SET 1 -#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 -#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 -#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 -#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 -#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 - -#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU -#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU -#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U +#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 +#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 +#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 +#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 +#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 + +#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU +#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU +#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U #define HINIC3_MSIX_CNT_SET(val, member) \ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \ @@ -129,7 +129,7 @@ struct hinic3_cmd_root_ctxt { uint8_t cmdq_depth; uint16_t rx_buf_sz; uint8_t lro_en; - uint8_t rsvd1; + uint8_t cmdq_mode; uint16_t sq_depth; uint16_t rq_depth; uint64_t rsvd2; @@ -143,17 +143,16 @@ enum hinic3_fw_ver_type { HINIC3_FW_VER_TYPE_CFG, }; -#define MGMT_MSG_CMD_OP_SET 1 -#define MGMT_MSG_CMD_OP_GET 0 +#define 
MGMT_MSG_CMD_OP_SET 1 +#define MGMT_MSG_CMD_OP_GET 0 -#define COMM_MAX_FEATURE_QWORD 4 struct comm_cmd_feature_nego { struct mgmt_msg_head head; uint16_t func_id; uint8_t opcode; /**< 1: set, 0: get. */ uint8_t rsvd; - uint64_t s_feature[COMM_MAX_FEATURE_QWORD]; + uint64_t s_feature[HINIC3_MAX_FEATURE_QWORD]; }; #define HINIC3_FW_VERSION_LEN 16 diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c index 668bbf4a0e..5d12cf7b5f 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.c +++ b/drivers/net/hinic3/base/hinic3_hwdev.c @@ -261,7 +261,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) { int err; - err = hinic3_init_cmdqs(hwdev); + err = hinic3_cmdq_init(hwdev); if (err) { PMD_DRV_LOG(ERR, "Init cmd queues failed"); return err; @@ -276,7 +276,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) return 0; set_cmdq_depth_err: - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); return err; } @@ -284,7 +284,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev) { - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); } static void @@ -426,6 +426,12 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) goto func_reset_err; } + err = hinic3_get_comm_features(hwdev, hwdev->features, HINIC3_MAX_FEATURE_QWORD); + if (err) { + PMD_DRV_LOG(ERR, "Get comm features failed"); + goto get_common_features_err; + } + err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1); if (err) goto set_used_state_err; @@ -444,6 +450,7 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0); set_used_state_err: func_reset_err: +get_common_features_err: get_func_info_err: free_mgmt_channel(hwdev); diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h index 161f1e2de5..c6661aa1a6 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.h +++ b/drivers/net/hinic3/base/hinic3_hwdev.h @@ -23,6 +23,18 @@ enum hinic3_set_arm_type { HINIC3_SET_ARM_TYPE_NUM }; +enum { + HINIC3_F_API_CHAIN = 1U << 0, + HINIC3_F_CLP = 1U << 1, + HINIC3_F_CHANNEL_DETECT = 1U << 2, + HINIC3_F_MBOX_SEGMENT = 1U << 3, + HINIC3_F_CMDQ_NUM = 1U << 4, + HINIC3_F_VIRTIO_VQ_SIZE = 1U << 5, + HINIC3_F_EXTEND_CAP = 1U << 6, + HINIC3_F_SMF_CACHE_INVALID = 1U << 7, + HINIC3_F_ONLY_ENHANCE_CMDQ = 1U << 8, + HINIC3_F_USE_REAL_RX_BUF_SIZE = 1U << 9, +}; struct hinic3_page_addr { void *virt_addr; uint64_t phys_addr; @@ -78,6 +90,11 @@ struct hinic3_hw_stats { #define HINIC3_CHIP_FAULT_SIZE (110 * 1024) #define MAX_DRV_BUF_SIZE 4096 +#define HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_ONLY_ENHANCE_CMDQ) +#define HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_USE_REAL_RX_BUF_SIZE) + struct nic_cmd_chip_fault_stats { uint32_t offset; uint8_t chip_fault_stats[MAX_DRV_BUF_SIZE]; @@ -141,6 +158,7 @@ struct hinic3_hwdev { uint16_t max_vfs; uint16_t link_status; + uint64_t features[HINIC3_MAX_FEATURE_QWORD]; }; bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev); diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c index 5db6d49922..b1f850dfff 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.c +++ b/drivers/net/hinic3/base/hinic3_mgmt.c @@ -13,6 +13,8 @@ #define SEGMENT_LEN 48 #define MGMT_MSG_MAX_SEQ_ID \ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN) +#define MGMT_MSG_LAST_SEG_MAX_LEN \ + 
(MAX_PF_MGMT_BUF_SIZE - SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID) #define BUF_OUT_DEFAULT_SIZE 1 @@ -34,7 +36,8 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, uint8_t seq_id, uint8_t seg_len, uint16_t msg_id) { - if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN) + if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN || + (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)) return false; if (seq_id == 0) { diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h index f8148406d3..4e77b9bec4 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.h +++ b/drivers/net/hinic3/base/hinic3_mgmt.h @@ -70,6 +70,8 @@ typedef enum { #define HINIC3_TOE_RES (1 << RES_TYPE_TOE) #define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC) +#define HINIC3_MAX_FEATURE_QWORD 4 + struct hinic3_recv_msg { void *msg; diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index c35fefdeac..ac44da46c2 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -48,6 +48,46 @@ static const struct vf_msg_handler vf_mag_cmd_handler[] = { }, }; +int +hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, + uint16_t cmd, void *buf_in, uint16_t in_size, + void *buf_out, uint16_t *out_size) +{ + uint32_t i; + bool cmd_to_pf = false; + struct hinic3_handler_info handler_info = { + .cmd = cmd, + .buf_in = buf_in, + .in_size = in_size, + .buf_out = buf_out, + .out_size = out_size, + .dst_func = HINIC3_MGMT_SRC_ID, + .direction = HINIC3_MSG_DIRECT_SEND, + .ack_type = HINIC3_MSG_ACK, + }; + + if (hwdev == NULL) + return -EINVAL; + + if (hinic3_func_type(hwdev) == TYPE_VF) { + if (mod == HINIC3_MOD_HILINK) { + for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { + if (cmd == vf_mag_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } else if (mod == HINIC3_MOD_L2NIC) { + for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { + if (cmd == vf_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } + } + if (cmd_to_pf) + handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); + + return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); +} + /** * Set CI table for a SQ. 
* @@ -1712,43 +1752,6 @@ hinic3_set_rq_flush(struct hinic3_hwdev *hwdev, uint16_t q_id) return err; } -int -hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, - uint16_t cmd, void *buf_in, uint16_t in_size, - void *buf_out, uint16_t *out_size) -{ - uint32_t i; - bool cmd_to_pf = false; - struct hinic3_handler_info handler_info = { - .cmd = cmd, - .buf_in = buf_in, - .in_size = in_size, - .buf_out = buf_out, - .out_size = out_size, - .dst_func = HINIC3_MGMT_SRC_ID, - .direction = HINIC3_MSG_DIRECT_SEND, - .ack_type = HINIC3_MSG_ACK, - }; - - if (hinic3_func_type(hwdev) == TYPE_VF) { - if (mod == HINIC3_MOD_HILINK) { - for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { - if (cmd == vf_mag_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } else if (mod == HINIC3_MOD_L2NIC) { - for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { - if (cmd == vf_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } - } - if (cmd_to_pf) - handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); - - return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); -} - int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status) diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build index 48ac7a47f5..729980d087 100644 --- a/drivers/net/hinic3/base/meson.build +++ b/drivers/net/hinic3/base/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2025 Huawei Technologies Co., Ltd base_sources = files( + 'hinic3_cmdq_enhance.c', 'hinic3_cmdq.c', 'hinic3_eqs.c', 'hinic3_hw_cfg.c', -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
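To make the split inside the unified cmdq_sync_cmd() path above easier to follow: the normal command queue keeps its 64-byte long-command WQE in a single 64-byte WQEBB, while the enhanced queue uses a 32-byte WQE, which with CMDQ_ENHANCE_WQEBB_SHIFT of 4 lines up with spanning two 16-byte WQEBBs. The snippet below is a standalone sketch of that selection with the constants copied from the patch; the enum, the helper and main() are illustrative stand-ins, not driver API.

#include <stdint.h>
#include <stdio.h>

/* Values taken from the patch above. */
#define WQE_LCMDQ_SIZE                  64
#define WQE_ENHANCE_CMDQ_SIZE           32
#define NUM_WQEBBS_FOR_CMDQ_WQE         1
#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2

enum cmdq_mode { NORMAL_CMDQ, ENHANCE_CMDQ };

/*
 * Pick the WQE size and WQEBB count per mode, the way
 * cmdq_sync_wqe_prepare() and cmdq_sync_cmd() do in the patch.
 */
static void cmdq_wqe_layout(enum cmdq_mode mode, uint32_t *wqe_size,
			    uint16_t *num_wqebbs)
{
	if (mode == NORMAL_CMDQ) {
		*wqe_size = WQE_LCMDQ_SIZE;
		*num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE;
	} else {
		*wqe_size = WQE_ENHANCE_CMDQ_SIZE;
		*num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE;
	}
}

int main(void)
{
	uint32_t size;
	uint16_t wqebbs;

	cmdq_wqe_layout(ENHANCE_CMDQ, &size, &wqebbs);
	printf("enhanced cmdq: %u-byte wqe, %u wqebbs\n", size, wqebbs);
	cmdq_wqe_layout(NORMAL_CMDQ, &size, &wqebbs);
	printf("normal cmdq:   %u-byte wqe, %u wqebbs\n", size, wqebbs);
	return 0;
}

Keeping the per-mode choice in one place is also why the completion polling and clear-busy-bit paths above only need a single cmdq_mode test rather than separate sync functions per queue type.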
* [V6 3/7] net/hinic3: use different callback func to split new/old cmdq operations 2026-03-19 13:52 ` [v6 " Feifei Wang 2026-03-19 13:52 ` [V6 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang 2026-03-19 13:52 ` [V6 2/7] net/hinic3: add enhance cmdq " Feifei Wang @ 2026-03-19 13:52 ` Feifei Wang 2026-03-19 13:52 ` [V6 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang ` (5 subsequent siblings) 8 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-19 13:52 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx series NIC with enhance cmdq, it send control message to hardware tile in NIC(htn), this is different from previous SPx NIC, which send control message to software tile in NIC(stn). Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 50 ++---- drivers/net/hinic3/base/hinic3_nic_cfg.h | 79 +++++---- drivers/net/hinic3/hinic3_ethdev.c | 16 +- drivers/net/hinic3/hinic3_nic_io.h | 130 ++++++++++++++ drivers/net/hinic3/hinic3_rx.c | 3 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 161 ++++++++++++++++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 55 ++++++ drivers/net/hinic3/htn_adapt/meson.build | 7 + .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 145 ++++++++++++++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 38 +++++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 11 files changed, 618 insertions(+), 73 deletions(-) create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index ac44da46c2..22caac0457 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -11,6 +11,7 @@ #include "hinic3_mbox.h" #include "hinic3_nic_cfg.h" #include "hinic3_wq.h" +#include "hinic3_nic_io.h" struct vf_msg_handler { uint16_t cmd; @@ -442,6 +443,7 @@ int hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) { struct hinic3_vport_state en_state; + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint16_t out_size = sizeof(en_state); int err; @@ -451,6 +453,7 @@ hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) memset(&en_state, 0, sizeof(en_state)); en_state.func_id = hinic3_global_func_id(hwdev); en_state.state = enable ? 
1 : 0; + en_state.num_qps = nic_dev->num_rqs; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_VPORT_ENABLE, @@ -1159,13 +1162,12 @@ hinic3_rss_set_hash_key(struct hinic3_hwdev *hwdev, uint8_t *key, uint16_t key_s } int -hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, - uint32_t *indir_table, uint32_t indir_table_size) +hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table) { struct hinic3_cmd_buf *cmd_buf = NULL; - uint16_t *indir_tbl = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; int err; - uint32_t i; if (!hwdev || !indir_table) return -EINVAL; @@ -1177,31 +1179,28 @@ hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, } cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, - cmd_buf, cmd_buf, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_get_rss_indir_table(nic_dev, cmd_buf); + err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, cmd_buf, 0); if (err) { PMD_DRV_LOG(ERR, "Get rss indir table failed"); hinic3_free_cmd_buf(cmd_buf); return err; } - indir_tbl = (uint16_t *)cmd_buf->buf; - for (i = 0; i < indir_table_size; i++) - indir_table[i] = *(indir_tbl + i); + nic_dev->cmdq_ops->cmd_buf_to_rss_indir_table(cmd_buf, indir_table); hinic3_free_cmd_buf(cmd_buf); return 0; } int -hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size) +hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table) { - struct nic_rss_indirect_tbl *indir_tbl = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - uint32_t i, size; - uint32_t *temp = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; uint64_t out_param = 0; int err; @@ -1214,22 +1213,9 @@ hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table return -ENOMEM; } - cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; - memset(indir_tbl, 0, sizeof(*indir_tbl)); - - for (i = 0; i < indir_table_size; i++) - indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - size = sizeof(indir_tbl->entry) / sizeof(uint16_t); - temp = (uint32_t *)indir_tbl->entry; - for (i = 0; i < size; i++) - temp[i] = rte_cpu_to_be_32(temp[i]); - - err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, - cmd_buf, &out_param, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir_table, cmd_buf); + err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set rss indir table failed"); err = -EFAULT; @@ -1477,7 +1463,7 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EIO; } - *cos_id = vf_dcb.state.default_cos; + *cos_id = vf_dcb.state.default_cos % HINIC3_COS_NUM_MAX_HTN; return 0; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index a88d62333d..06d5bc7d1b 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -14,16 +14,17 @@ #define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1) #define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1) -#define HINIC3_DCB_UP_MAX 0x8 +#define HINIC3_DCB_UP_MAX 0x8 -#define 
HINIC3_MAX_NUM_RQ 256 +#define HINIC3_MAX_NUM_RQ 256 -#define HINIC3_MAX_MTU_SIZE 9600 -#define HINIC3_MIN_MTU_SIZE 256 +#define HINIC3_MAX_MTU_SIZE 9600 +#define HINIC3_MIN_MTU_SIZE 256 -#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX_HTN 4 -#define HINIC3_VLAN_TAG_SIZE 4 +#define HINIC3_VLAN_TAG_SIZE 4 #define HINIC3_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2) @@ -34,28 +35,41 @@ #define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen) -#define HINIC3_PF_SET_VF_ALREADY 0x4 -#define HINIC3_MGMT_STATUS_EXIST 0x6 -#define CHECK_IPSU_15BIT 0x8000 +#define HINIC3_PF_SET_VF_ALREADY 0x4 +#define HINIC3_MGMT_STATUS_EXIST 0x6 +#define CHECK_IPSU_15BIT 0x8000 -#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB -#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC +#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB +#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC -#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF +#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF -#define HINIC3_MAX_UC_MAC_ADDRS 128 -#define HINIC3_MAX_MC_MAC_ADDRS 2048 +#define HINIC3_MAX_UC_MAC_ADDRS 128 +#define HINIC3_MAX_MC_MAC_ADDRS 2048 -#define CAP_INFO_MAX_LEN 512 -#define VENDOR_MAX_LEN 17 +#define CAP_INFO_MAX_LEN 512 +#define VENDOR_MAX_LEN 17 /* Structures for RSS config. */ -#define HINIC3_RSS_INDIR_SIZE 256 -#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 -#define HINIC3_RSS_KEY_SIZE 40 -#define HINIC3_RSS_ENABLE 0x01 -#define HINIC3_RSS_DISABLE 0x00 -#define HINIC3_INVALID_QID_BASE 0xffff +#define HINIC3_RSS_INDIR_SIZE 256 +#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 +#define HINIC3_RSS_KEY_SIZE 40 +#define HINIC3_RSS_ENABLE 0x01 +#define HINIC3_RSS_DISABLE 0x00 +#define HINIC3_INVALID_QID_BASE 0xffff + +#define HINIC3_SUPPORT_FEATURE(dev, feature) \ + ((hinic3_get_driver_feature(dev) & NIC_F_##feature) != 0) +#define HINIC3_SUPPORT_RX_HW_COMPACT_CQE(dev) \ + HINIC3_SUPPORT_FEATURE(dev, RX_HW_COMPACT_CQE) +#define HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(dev) \ + HINIC3_SUPPORT_FEATURE(dev, TX_WQE_COMPACT_TASK) +#define HINIC3_SUPPORT_VXLAN_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, VXLAN_OFFLOAD) +#define HINIC3_SUPPORT_GENEVE_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, GENEVE_OFFLOAD) +#define HINIC3_SUPPORT_IPXIP_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, IPXIP_OFFLOAD) struct hinic3_rss_type { uint8_t tcp_ipv6_ext; @@ -312,7 +326,9 @@ struct hinic3_vport_state { uint16_t func_id; uint16_t rsvd1; uint8_t state; /**< 0:disable, 1:enable. */ - uint8_t rsvd2[3]; + uint8_t num_qps; + uint8_t rx_compact_wqe_en; + uint8_t rsvd2; }; #define MAG_CMD_PORT_DISABLE 0x0 @@ -670,12 +686,15 @@ enum hinic3_func_tbl_cfg_bitmap { FUNC_CFG_INIT, FUNC_CFG_RX_BUF_SIZE, FUNC_CFG_MTU, + FUNC_CFG_RX_COMPACT_WQE_EN, /**< Enable 8Byte WQE. */ }; struct hinic3_func_tbl_cfg { uint16_t rx_wqe_buf_size; uint16_t mtu; - uint32_t rsvd[9]; + uint8_t rx_compact_wqe_en; /**< Enable Rx 8Byte compact WQE. */ + uint8_t rsvd0[3]; + uint32_t rsvd1[8]; }; struct hinic3_cmd_set_func_tbl { @@ -895,7 +914,7 @@ struct hinic3_set_fdir_ethertype_rule { struct mgmt_msg_head head; uint16_t func_id; - uint16_t rsvd1; + uint16_t index; uint8_t pkt_type_en; uint8_t pkt_type; uint8_t qid; @@ -1231,14 +1250,11 @@ int hinic3_rss_template_free(struct hinic3_hwdev *hwdev); * Device pointer to hwdev. * @param[in] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. 
*/ -int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table); /** * Get RSS indirect table. @@ -1247,14 +1263,11 @@ int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_t * Device pointer to hwdev. * @param[out] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table); /** * Set RSS type. diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index da2d6722d2..780b17414a 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -15,6 +15,8 @@ #include "base/hinic3_hw_comm.h" #include "base/hinic3_nic_cfg.h" #include "base/hinic3_nic_event.h" +#include "htn_adapt/hinic3_htn_cmdq.h" +#include "stn_adapt/hinic3_stn_cmdq.h" #include "hinic3_nic_io.h" #include "hinic3_tx.h" #include "hinic3_rx.h" @@ -2577,8 +2579,7 @@ hinic3_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) { PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err); return err; @@ -2626,8 +2627,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) return err; @@ -2648,8 +2648,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, } } - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl); if (err) PMD_DRV_LOG(ERR, "Set RSS reta table failed"); @@ -3387,6 +3386,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto get_cap_fail; } + if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) + nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + else + nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index db5802e4b7..c8e690981b 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -6,6 +6,7 @@ #define _HINIC3_NIC_IO_H_ #include "hinic3_ethdev.h" +#include "base/hinic3_cmdq.h" #define HINIC3_SQ_WQEBB_SHIFT 4 #define HINIC3_RQ_WQEBB_SHIFT 3 @@ -25,6 +26,13 @@ #define HINIC3_CI_PADDR(base_paddr, q_id) \ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE) +#define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) + +#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -37,12 +45,119 @@ enum hinic3_queue_type { HINIC3_MAX_QUEUE_TYPE, }; +enum hinic3_qp_ctxt_type { + HINIC3_QP_CTXT_TYPE_SQ, + HINIC3_QP_CTXT_TYPE_RQ, +}; + +/* Prepare cmd to clean tso/lro space */ +typedef uint8_t 
(*prepare_cmd_buf_clean_tso_lro_space_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type); +/* Prepare cmd to store RQ and TQ ctxt */ +typedef uint8_t (*prepare_cmd_buf_qp_context_multi_store_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, + uint16_t max_ctxts); +/* Prepare cmd to modify vlan tag */ +typedef uint8_t (*prepare_cmd_buf_modify_svlan_t)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, + uint8_t vlan_mode); +/* Prepare cmd to set RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_set_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf); +/* Prepare cmd to get RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_get_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf); +/* Configure RSS indir table */ +typedef void (*cmd_buf_to_rss_indir_table_t)(const struct hinic3_cmd_buf *cmd_buf, + uint32_t *indir_table); + +struct hinic3_nic_cmdq_ops { + prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space; + prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store; + prepare_cmd_buf_modify_svlan_t prepare_cmd_buf_modify_svlan; + prepare_cmd_buf_set_rss_indir_table_t prepare_cmd_buf_set_rss_indir_table; + prepare_cmd_buf_get_rss_indir_table_t prepare_cmd_buf_get_rss_indir_table; + cmd_buf_to_rss_indir_table_t cmd_buf_to_rss_indir_table; +}; + /* Doorbell info. */ struct hinic3_db { uint32_t db_info; uint32_t pi_hi; }; +struct hinic3_sq_ctxt { + uint32_t ci_pi; + uint32_t drop_mode_sp; + uint32_t wq_pfn_hi_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd0; + uint32_t pkt_drop_thd; + uint32_t global_sq_id; + uint32_t vlan_ceq_attr; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t rsvd8; + uint32_t rsvd9; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_ctxt { + uint32_t ci_pi; + uint32_t ceq_attr; + uint32_t wq_pfn_hi_type_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd[3]; + uint32_t cqe_sge_len; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t pi_paddr_hi; + uint32_t pi_paddr_lo; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_cqe_ctx { + struct mgmt_msg_head msg_head; + + uint8_t cqe_type; + uint8_t rq_id; + uint8_t threshold_cqe_num; + uint8_t rsvd1; + + uint16_t msix_entry_idx; + uint16_t rsvd2; + + uint32_t ci_addr_hi; + uint32_t ci_addr_lo; + + uint16_t timer_loop; + uint16_t rsvd3; +}; + +struct hinic3_rq_enable { + struct mgmt_msg_head msg_head; + + uint32_t rq_id; + uint8_t rq_enable; + uint8_t rsvd[3]; +}; + #define DB_INFO_QID_SHIFT 0 #define DB_INFO_NON_FILTER_SHIFT 22 #define DB_INFO_CFLAG_SHIFT 23 @@ -142,6 +257,21 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); + +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @retval Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); + /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e8e417b474..3d5f4e4524 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -407,8 +407,7 @@ hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) /* Build indir tbl according to the number of rss queue. */ hinic3_fill_indir_tbl(nic_dev, indir_tbl); - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c new file mode 100644 index 0000000000..d997647f48 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_htn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + ctxt_block->cmdq_hdr.dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id, uint16_t func_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->dest_func_id = func_id; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t func_id; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + func_id = hinic3_global_func_id(nic_dev->hwdev); + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid, func_id); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], + start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->dest_func_id = func_id; + vlan_ctx->start_qid = q_id; + vlan_ctx->vlan_tag = vlan_tag; + vlan_ctx->vlan_sel = 0; 
/* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_HTN_CMD_SVLAN_MODIFY; +} + +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf + sizeof(struct hinic3_rss_cmd_header); + cmd_buf->size = sizeof(struct hinic3_rss_cmd_header) + HINIC3_RSS_INDIR_SIZE; + memset(indir_tbl, 0, HINIC3_RSS_INDIR_SIZE); + + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl[i] = (uint8_t)(*(indir_table + i)); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(indir_tbl, HINIC3_RSS_INDIR_SIZE); + + return HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + memset(cmd_buf->buf, 0, cmd_buf->size); + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + return HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_be32_to_cpu(cmd_buf->buf, HINIC3_RSS_INDIR_SIZE); + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h new file mode 100644 index 0000000000..1245b9c8d8 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_HTN_CMDQ_H_ +#define _HINIC3_HTN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint32_t rsvd[2]; + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_rss_cmd_header { + uint32_t rsv[3]; + uint16_t rsv1; + uint16_t dest_func_id; +}; + +/* NIC HTN CMD */ +enum hinic3_htn_cmd { + 
HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST = 0x20, + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_LD, + HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN, + HINIC3_HTN_CMD_SVLAN_MODIFY, + HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE, + HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE +}; + +struct hinic3_vlan_ctx { + uint32_t rsv[2]; + uint16_t vlan_tag; + uint8_t vlan_sel; + uint8_t vlan_mode; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +#endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/htn_adapt/meson.build b/drivers/net/hinic3/htn_adapt/meson.build new file mode 100644 index 0000000000..17f7ad09e3 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_htn_cmdq.c', +) diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c new file mode 100644 index 0000000000..3d4becf07c --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_stn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->rsvd = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->func_id = func_id; + vlan_ctx->qid = q_id; + vlan_ctx->vlan_id = 
vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i, size; + uint32_t *temp = NULL; + struct nic_rss_indirect_tbl *indir_tbl = NULL; + + indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; + cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); + memset(indir_tbl, 0, sizeof(*indir_tbl)); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); + size = sizeof(indir_tbl->entry) / 4; + temp = (uint32_t *)indir_tbl->entry; + for (i = 0; i < size; i++) { + rte_atomic_thread_fence(rte_memory_order_seq_cst); + temp[i] = rte_cpu_to_be_32(temp[i]); + } + return HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + (void)nic_dev; + memset(cmd_buf->buf, 0, cmd_buf->size); + + return HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint16_t *indir_tbl = NULL; + + indir_tbl = (uint16_t *)cmd_buf->buf; + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h new file mode 100644 index 0000000000..f8d26e9397 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_STN_CMDQ_H_ +#define _HINIC3_STN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t rsvd; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; + uint32_t rsvd; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_vlan_ctx { + uint32_t func_id; + uint32_t qid; /* if qid = 0xFFFF, config for all queues */ + uint32_t vlan_id; + uint32_t vlan_mode; + uint32_t vlan_sel; +}; + +#endif /* _HINIC3_STN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/stn_adapt/meson.build b/drivers/net/hinic3/stn_adapt/meson.build new file mode 100644 index 0000000000..99f7f66ab4 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += 
files( + 'hinic3_stn_cmdq.c', +) -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
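
The stn_adapt/htn_adapt split above boils down to an ops table chosen once at init: hinic3_func_init() picks hinic3_nic_cmdq_get_stn_ops() or hinic3_nic_cmdq_get_htn_ops() based on the NIC_F_HTN_CMDQ feature bit, and generic callers such as hinic3_rss_set_indir_tbl() only ever go through nic_dev->cmdq_ops, letting each backend encode its own command buffer layout and return its own opcode. The standalone sketch below shows the shape of that dispatch with deliberately simplified placeholder names, sizes, and opcodes (demo_*, 0x0A/0x24, an 8-entry table); it is not the driver's real structures or wire format, only an illustration of the pattern.

/*
 * Minimal standalone sketch of the ops-table dispatch used by this series.
 * All names, opcodes, and sizes here are simplified placeholders, not the
 * actual hinic3 definitions.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DEMO_F_HTN_CMDQ   (1u << 21)   /* stand-in for NIC_F_HTN_CMDQ */
#define DEMO_INDIR_SIZE   8            /* real indirection table is larger */

struct demo_cmd_buf {
	uint8_t  buf[64];
	uint16_t size;
};

/*
 * Each backend fills the command buffer in its own wire format and returns
 * the opcode the caller must send along with it.
 */
struct demo_cmdq_ops {
	uint8_t (*prepare_set_rss_indir)(const uint32_t *indir,
					 struct demo_cmd_buf *cmd);
};

/* "Old" (stn-style) backend: 16-bit table entries, no header. */
static uint8_t stn_prepare_set_rss_indir(const uint32_t *indir,
					 struct demo_cmd_buf *cmd)
{
	uint16_t *tbl = (uint16_t *)cmd->buf;

	for (unsigned int i = 0; i < DEMO_INDIR_SIZE; i++)
		tbl[i] = (uint16_t)indir[i];
	cmd->size = DEMO_INDIR_SIZE * sizeof(uint16_t);
	return 0x0A;                   /* placeholder opcode */
}

/* "New" (htn-style) backend: 8-bit entries behind a small header stub. */
static uint8_t htn_prepare_set_rss_indir(const uint32_t *indir,
					 struct demo_cmd_buf *cmd)
{
	uint8_t *tbl = cmd->buf + 4;   /* 4-byte header stub */

	memset(cmd->buf, 0, 4);
	for (unsigned int i = 0; i < DEMO_INDIR_SIZE; i++)
		tbl[i] = (uint8_t)indir[i];
	cmd->size = 4 + DEMO_INDIR_SIZE;
	return 0x24;                   /* placeholder opcode */
}

static const struct demo_cmdq_ops stn_ops = {
	.prepare_set_rss_indir = stn_prepare_set_rss_indir,
};
static const struct demo_cmdq_ops htn_ops = {
	.prepare_set_rss_indir = htn_prepare_set_rss_indir,
};

int main(void)
{
	uint32_t feature_cap = DEMO_F_HTN_CMDQ;   /* pretend caps report HTN */
	uint32_t indir[DEMO_INDIR_SIZE] = { 0, 1, 2, 3, 0, 1, 2, 3 };
	struct demo_cmd_buf cmd = { 0 };

	/* Backend is selected once at init time, keyed off the feature bit. */
	const struct demo_cmdq_ops *ops =
		(feature_cap & DEMO_F_HTN_CMDQ) ? &htn_ops : &stn_ops;

	/* Generic caller never needs to know which backend is active. */
	uint8_t opcode = ops->prepare_set_rss_indir(indir, &cmd);

	printf("opcode 0x%02x, payload %u bytes\n",
	       (unsigned int)opcode, (unsigned int)cmd.size);
	return 0;
}

The design choice this mirrors is that new command formats can be added by supplying another ops table, while callers in hinic3_nic_cfg.c and hinic3_nic_io.c stay unchanged.
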
* [V6 4/7] net/hinic3: add fun init ops to support Compact CQE 2026-03-19 13:52 ` [v6 " Feifei Wang ` (2 preceding siblings ...) 2026-03-19 13:52 ` [V6 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang @ 2026-03-19 13:52 ` Feifei Wang 2026-03-19 13:52 ` [V6 5/7] net/hinic3: add rx " Feifei Wang ` (4 subsequent siblings) 8 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-19 13:52 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, use compact CQE to achieve better performance. In this mode, CQE is uploaded together with packet. When doing fun init, replace CQE's dma memory mapping with CI index, hinic3 driver will loop CI to check if packet arrive. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.c | 212 +++++-- drivers/net/hinic3/hinic3_ethdev.h | 117 ++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++++---------- drivers/net/hinic3/hinic3_nic_io.h | 61 +- drivers/net/hinic3/hinic3_rx.h | 18 + drivers/net/hinic3/hinic3_tx.h | 8 + .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 24 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 12 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 24 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 12 +- 10 files changed, 577 insertions(+), 436 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 780b17414a..1010773ac1 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -32,7 +32,7 @@ #define HINIC3_DEFAULT_RX_FREE_THRESH 32u #define HINIC3_DEFAULT_TX_FREE_THRESH 32u -#define HINIC3_RX_WAIT_CYCLE_THRESH 500 +#define HINIC3_RX_WAIT_CYCLE_THRESH 150 /** * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID. @@ -431,8 +431,10 @@ hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) { + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint8_t default_cos = 0; uint8_t valid_cos_bitmap; + uint8_t cos_num_max; uint8_t i; valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap; @@ -441,7 +443,10 @@ hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EFAULT; } - for (i = 0; i < HINIC3_COS_NUM_MAX; i++) { + cos_num_max = nic_dev->feature_cap & NIC_F_HTN_CMDQ ? + HINIC3_COS_NUM_MAX_HTN : HINIC3_COS_NUM_MAX; + + for (i = 0; i < cos_num_max; i++) { if (valid_cos_bitmap & RTE_BIT32(i)) /* Find max cos id as default cos. */ default_cos = i; @@ -644,6 +649,15 @@ hinic3_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +hinic3_dev_tnl_tso_support(struct rte_eth_dev_info *info, struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + if (HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO; +} + /** * Get information about the device. 
* @@ -684,6 +698,8 @@ hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) + hinic3_dev_tnl_tso_support(info, nic_dev); info->hash_key_size = HINIC3_RSS_KEY_SIZE; info->reta_size = HINIC3_RSS_INDIR_SIZE; @@ -926,16 +942,25 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, struct hinic3_rxq *rxq = NULL; const struct rte_memzone *rq_mz = NULL; const struct rte_memzone *cqe_mz = NULL; + const struct rte_memzone *ci_mz = NULL; const struct rte_memzone *pi_mz = NULL; uint16_t rq_depth, rx_free_thresh; uint32_t queue_buf_size; void *db_addr = NULL; int wqe_count; uint32_t buf_size; + uint32_t rx_buf_size; int err; nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + /* Queue depth must be equal to queue 0 */ + if (qid != 0 && nb_desc != nic_dev->rxqs[0]->q_depth) { + PMD_DRV_LOG(WARNING, "rxq%u depth:%u is not equal to queue0 depth:%u.", + qid, nb_desc, nic_dev->rxqs[0]->q_depth); + nb_desc = nic_dev->rxqs[0]->q_depth; + } + /* Queue depth must be power of 2, otherwise will be aligned up. */ rq_depth = (nb_desc & (nb_desc - 1)) ? ((uint16_t)(1U << (rte_log2_u32(nb_desc) + 1))) : nb_desc; @@ -997,8 +1022,7 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH; rxq->rx_deferred_start = rx_conf->rx_deferred_start; /* If buf_len used for function table, need to translated. */ - uint16_t rx_buf_size = - rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rx_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size); if (err) { PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s", @@ -1006,11 +1030,16 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto adjust_bufsize_fail; } - if (buf_size >= HINIC3_RX_BUF_SIZE_4K && - buf_size < HINIC3_RX_BUF_SIZE_16K) - rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; - else - rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + /* If NIC support compact CQE, use compact wqe as default. 
*/ + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + rxq->wqe_type = HINIC3_COMPACT_RQ_WQE; + } else { + if (buf_size >= HINIC3_RX_BUF_SIZE_4K && + buf_size < HINIC3_RX_BUF_SIZE_16K) + rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; + else + rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + } rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type; rxq->wqebb_size = (uint16_t)RTE_BIT32(rxq->wqebb_shift); @@ -1062,36 +1091,52 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_rx_info_fail; } - cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, - rq_depth * sizeof(*rxq->rx_cqe), - RTE_CACHE_LINE_SIZE, socket_id); - if (!cqe_mz) { - PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", - dev->data->name); - err = -ENOMEM; - goto alloc_cqe_mz_fail; - } - memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); - rxq->cqe_mz = cqe_mz; - rxq->cqe_start_paddr = cqe_mz->iova; - rxq->cqe_start_vaddr = cqe_mz->addr; - rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; - - wqe_count = hinic3_rx_fill_wqe(rxq); - if (wqe_count != rq_depth) { - PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", - wqe_count, dev->data->name); - err = -ENOMEM; - goto fill_rx_wqe_fail; + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_ci_mz", qid, + sizeof(*rxq->rq_ci), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (!ci_mz) { + PMD_DRV_LOG(ERR, "Allocate ci mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + + memset(ci_mz->addr, 0, sizeof(*rxq->rq_ci)); + rxq->ci_mz = ci_mz; + rxq->rq_ci = (struct hinic3_rq_ci_wb *)ci_mz->addr; + rxq->rq_ci_paddr = ci_mz->iova; + } else { + cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, + rq_depth * sizeof(*rxq->rx_cqe), + RTE_CACHE_LINE_SIZE, socket_id); + if (!cqe_mz) { + PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); + rxq->cqe_mz = cqe_mz; + rxq->cqe_start_paddr = cqe_mz->iova; + rxq->cqe_start_vaddr = cqe_mz->addr; + rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; + + wqe_count = hinic3_rx_fill_wqe(rxq); + if (wqe_count != rq_depth) { + PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", + wqe_count, dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(cqe_mz); + goto alloc_cqe_ci_mz_fail; + } } - /* Record rxq pointer in rte_eth rx_queues. */ dev->data->rx_queues[qid] = rxq; return 0; -fill_rx_wqe_fail: - hinic3_memzone_free(rxq->cqe_mz); -alloc_cqe_mz_fail: +alloc_cqe_ci_mz_fail: rte_free(rxq->rx_info); alloc_rx_info_fail: @@ -1199,6 +1244,7 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, txq->owner = 1; txq->cos = nic_dev->default_cos; txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->tx_wqe_compact_task = HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev); ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid, HINIC3_CI_Q_ADDR_SIZE, @@ -1246,7 +1292,6 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_tx_info_fail; } - /* Record txq pointer in rte_eth tx_queues. 
*/ dev->data->tx_queues[qid] = txq; return 0; @@ -1274,7 +1319,10 @@ hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) hinic3_free_rxq_mbufs(rxq); - hinic3_memzone_free(rxq->cqe_mz); + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) + hinic3_memzone_free(rxq->ci_mz); + else + hinic3_memzone_free(rxq->cqe_mz); rte_free(rxq->rx_info); rxq->rx_info = NULL; @@ -1323,9 +1371,12 @@ hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) static int hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; + rxq = dev->data->rx_queues[rq_id]; + rc = hinic3_start_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1333,14 +1384,17 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } - dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", - rq_id); - return rc; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", rq_id); + return rc; + } } + + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1358,14 +1412,10 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) static int hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); - return rc; - } rc = hinic3_stop_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1373,6 +1423,15 @@ hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); + return rc; + } + } + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; @@ -1388,6 +1447,7 @@ hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id) HINIC3_SET_TXQ_STARTED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1404,6 +1464,7 @@ hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id) dev->data->name, sq_id); return rc; } + HINIC3_SET_TXQ_STOPPED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -3286,6 +3347,24 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { .flow_ops_get = hinic3_dev_filter_ctrl, }; +static void hinic3_nic_tx_rx_ops_init(struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev)) + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_compact_task_offload; + else + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_normal_task_offload; + + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + nic_dev->rx_ops->nic_rx_get_cqe_info = hinic3_rx_get_compact_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_integrated_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty; + } else { + nic_dev->rx_ops->nic_rx_get_cqe_info = 
hinic3_rx_get_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_separate_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_rq_empty; + } +} + /** * Initialize the network function, including hardware configuration, memory * allocation for data structures, MAC address setup, and interrupt enabling. @@ -3333,6 +3412,27 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto alloc_eth_addr_fail; } + nic_dev->cmdq_ops = rte_zmalloc("cmdq_ops", sizeof(struct hinic3_nic_cmdq_ops), 0); + if (!nic_dev->cmdq_ops) { + PMD_DRV_LOG(ERR, "Allocate cmdq_ops memory failed"); + err = -ENOMEM; + goto alloc_cmdq_ops_fail; + } + + nic_dev->rx_ops = rte_zmalloc("rx_ops", sizeof(struct hinic3_nic_rx_ops), 0); + if (!nic_dev->rx_ops) { + PMD_DRV_LOG(ERR, "Allocate rx_ops memory failed"); + err = -ENOMEM; + goto alloc_rx_ops_fail; + } + + nic_dev->tx_ops = rte_zmalloc("tx_ops", sizeof(struct hinic3_nic_tx_ops), 0); + if (!nic_dev->tx_ops) { + PMD_DRV_LOG(ERR, "Allocate tx_ops memory failed"); + err = -ENOMEM; + goto alloc_tx_ops_fail; + } + nic_dev->mc_list = rte_zmalloc("hinic3_mc", HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0); if (!nic_dev->mc_list) { @@ -3387,9 +3487,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) } if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) - nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); else - nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + + hinic3_nic_tx_rx_ops_init(nic_dev); err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { @@ -3479,6 +3581,18 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) nic_dev->mc_list = NULL; alloc_mc_list_fail: + rte_free(nic_dev->tx_ops); + nic_dev->tx_ops = NULL; + +alloc_tx_ops_fail: + rte_free(nic_dev->rx_ops); + nic_dev->rx_ops = NULL; + +alloc_rx_ops_fail: + rte_free(nic_dev->cmdq_ops); + nic_dev->cmdq_ops = NULL; + +alloc_cmdq_ops_fail: rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 4a5dbb0844..3898edd076 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -14,44 +14,50 @@ #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle) -#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN -#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD -#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD -#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG -#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM -#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM -#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM -#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN -#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK -#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM -#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 -#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 -#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN -#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED -#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH -#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK -#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN -#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM -#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 -#define 
HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO -#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM +#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN +#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD +#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD +#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG +#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM +#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM +#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM +#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN +#define HINIC3_PKT_TX_QINQ_PKT RTE_MBUF_F_TX_QINQ +#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK +#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM +#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 +#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 +#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN +#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED +#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH +#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK +#define HINIC3_PKT_TX_TUNNEL_GRE RTE_MBUF_F_TX_TUNNEL_GRE +#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN +#define HINIC3_PKT_TX_TUNNEL_VXLAN_GPE RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE +#define HINIC3_PKT_TX_TUNNEL_GENEVE RTE_MBUF_F_TX_TUNNEL_GENEVE +#define HINIC3_PKT_TX_TUNNEL_IPIP RTE_MBUF_F_TX_TUNNEL_IPIP +#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM +#define HINIC3_PKT_TX_OUTER_UDP_CKSUM RTE_MBUF_F_TX_OUTER_UDP_CKSUM +#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 +#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO +#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM #define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool" /* Mbuf pool for copy invalid mbuf segs. 
*/ -#define HINIC3_COPY_MEMPOOL_DEPTH 1024 -#define HINIC3_COPY_MEMPOOL_CACHE 128 -#define HINIC3_COPY_MBUF_SIZE 4096 +#define HINIC3_COPY_MEMPOOL_DEPTH 1024 +#define HINIC3_COPY_MEMPOOL_CACHE 128 +#define HINIC3_COPY_MBUF_SIZE 4096 -#define HINIC3_DEV_NAME_LEN 32 -#define DEV_STOP_DELAY_MS 100 -#define DEV_START_DELAY_MS 100 -#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 +#define HINIC3_DEV_NAME_LEN 32 +#define DEV_STOP_DELAY_MS 100 +#define DEV_START_DELAY_MS 100 +#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 -#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) -#define HINIC3_MAX_QUEUE_NUM 64 +#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) +#define HINIC3_MAX_QUEUE_NUM 256 #define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \ ((struct hinic3_nic_dev *)(dev)->data->dev_private) @@ -68,23 +74,34 @@ enum hinic3_tx_cvlan_type { }; enum nic_feature_cap { - NIC_F_CSUM = RTE_BIT32(0), - NIC_F_SCTP_CRC = RTE_BIT32(1), - NIC_F_TSO = RTE_BIT32(2), - NIC_F_LRO = RTE_BIT32(3), - NIC_F_UFO = RTE_BIT32(4), - NIC_F_RSS = RTE_BIT32(5), - NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), - NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), - NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), - NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), - NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), - NIC_F_FDIR = RTE_BIT32(11), - NIC_F_PROMISC = RTE_BIT32(12), - NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_CSUM = RTE_BIT32(0), + NIC_F_SCTP_CRC = RTE_BIT32(1), + NIC_F_TSO = RTE_BIT32(2), + NIC_F_LRO = RTE_BIT32(3), + NIC_F_UFO = RTE_BIT32(4), + NIC_F_RSS = RTE_BIT32(5), + NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), + NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), + NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), + NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), + NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), + NIC_F_FDIR = RTE_BIT32(11), + NIC_F_PROMISC = RTE_BIT32(12), + NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_PTP_1588_V2 = RTE_BIT32(18), + NIC_F_TX_WQE_COMPACT_TASK = RTE_BIT32(19), + NIC_F_RX_HW_COMPACT_CQE = RTE_BIT32(20), + NIC_F_HTN_CMDQ = RTE_BIT32(21), + NIC_F_GENEVE_OFFLOAD = RTE_BIT32(22), + NIC_F_IPXIP_OFFLOAD = RTE_BIT32(23), + NIC_F_TC_FLOWER_OFFLOAD = RTE_BIT32(24), + NIC_F_HTN_FDIR = RTE_BIT32(25), + NIC_F_SQ_RQ_CI_COALESCE = RTE_BIT32(26), + NIC_F_RX_SW_COMPACT_CQE = RTE_BIT32(27), + }; -#define DEFAULT_DRV_FEATURE 0x3FFF +#define DEFAULT_DRV_FEATURE 0x3FC3FFF TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow); TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow); @@ -133,6 +150,10 @@ struct hinic3_nic_dev { struct hinic3_tcam_info tcam; struct hinic3_ethertype_filter_list filter_ethertype_list; struct hinic3_fdir_rule_filter_list filter_fdir_rule_list; + struct hinic3_nic_cmdq_ops *cmdq_ops; + struct hinic3_nic_rx_ops *rx_ops; + struct hinic3_nic_tx_ops *tx_ops; + }; extern const struct rte_flow_ops hinic3_flow_ops; diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c index 7f2972f1d1..9203dcce40 100644 --- a/drivers/net/hinic3/hinic3_nic_io.c +++ b/drivers/net/hinic3/hinic3_nic_io.c @@ -11,310 +11,194 @@ #include "hinic3_rx.h" #include "hinic3_tx.h" -#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 -#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 -#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF -#define HINIC3_DEAULT_DROP_THD_OFF 0 - -#define WQ_PREFETCH_MAX 6 -#define WQ_PREFETCH_MIN 1 -#define WQ_PREFETCH_THRESHOLD 256 - -#define HINIC3_Q_CTXT_MAX \ - ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) - -enum hinic3_qp_ctxt_type { - HINIC3_QP_CTXT_TYPE_SQ, 
- HINIC3_QP_CTXT_TYPE_RQ, -}; - -struct hinic3_qp_ctxt_header { - uint16_t num_queues; - uint16_t queue_type; - uint16_t start_qid; - uint16_t rsvd; -}; - -struct hinic3_sq_ctxt { - uint32_t ci_pi; - uint32_t drop_mode_sp; /**< Packet drop mode and special flags. */ - uint32_t wq_pfn_hi_owner; /**< High PFN and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd0; /**< Reserved field 0. */ - uint32_t pkt_drop_thd; /**< Packet drop threshold. */ - uint32_t global_sq_id; - uint32_t vlan_ceq_attr; /**< VLAN and CEQ attributes. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t rsvd8; /**< Reserved field 8. */ - uint32_t rsvd9; /**< Reserved field 9. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_rq_ctxt { - uint32_t ci_pi; - uint32_t ceq_attr; /**< Completion event queue attributes. */ - uint32_t wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd[3]; /**< Reserved field. */ - uint32_t cqe_sge_len; /**< CQE scatter/gather element length. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t pi_paddr_hi; /**< High 32-bits of PI DMA address. */ - uint32_t pi_paddr_lo; /**< Low 32-bits of PI DMA address. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. 
*/ -}; - -struct hinic3_sq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_rq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; - uint32_t rsvd; -}; - -#define SQ_CTXT_SIZE(num_sqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_sqs) * sizeof(struct hinic3_sq_ctxt))) - -#define RQ_CTXT_SIZE(num_rqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - -#define CI_IDX_HIGH_SHIFH 12 +#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 +#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 +#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF +#define HINIC3_DEAULT_DROP_THD_OFF 0 + +#define WQ_PREFETCH_MAX 6 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +#define CI_IDX_HIGH_SHIFH 12 #define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) -#define SQ_CTXT_PI_IDX_SHIFT 0 -#define SQ_CTXT_CI_IDX_SHIFT 16 +#define SQ_CTXT_PI_IDX_SHIFT 0 +#define SQ_CTXT_CI_IDX_SHIFT 16 -#define SQ_CTXT_PI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_IDX_MASK 0xFFFFU +#define SQ_CTXT_PI_IDX_MASK 0xFFFFU +#define SQ_CTXT_CI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_PI_SET(val, member) \ +#define SQ_CTXT_CI_PI_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 -#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 +#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 +#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 -#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U -#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U +#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U +#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U -#define SQ_CTXT_MODE_SET(val, member) \ - (((val) & SQ_CTXT_MODE_##member##_MASK) \ +#define SQ_CTXT_MODE_SET(val, member) \ + (((val) & SQ_CTXT_MODE_##member##_MASK) \ << SQ_CTXT_MODE_##member##_SHIFT) -#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 +#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 -#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define SQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define SQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ << SQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 -#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 +#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 +#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 -#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ - (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ +#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ + (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ << SQ_CTXT_PKT_DROP_##member##_SHIFT) -#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 +#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 -#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU +#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU #define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_VLAN_TAG_SHIFT 0 -#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 -#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 -#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 +#define 
SQ_CTXT_VLAN_TAG_SHIFT 0 +#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 +#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 +#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 -#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU -#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U -#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U -#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U +#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU +#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U +#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U +#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U -#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ - (((val) & SQ_CTXT_VLAN_##member##_MASK) \ +#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ + (((val) & SQ_CTXT_VLAN_##member##_MASK) \ << SQ_CTXT_VLAN_##member##_SHIFT) -#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define SQ_CTXT_PREF_CI_HI_SHIFT 0 -#define SQ_CTXT_PREF_OWNER_SHIFT 4 +#define SQ_CTXT_PREF_CI_HI_SHIFT 0 +#define SQ_CTXT_PREF_OWNER_SHIFT 4 -#define SQ_CTXT_PREF_CI_HI_MASK 0xFU -#define SQ_CTXT_PREF_OWNER_MASK 0x1U +#define SQ_CTXT_PREF_CI_HI_MASK 0xFU +#define SQ_CTXT_PREF_OWNER_MASK 0x1U -#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define SQ_CTXT_PREF_SET(val, member) \ - (((val) & SQ_CTXT_PREF_##member##_MASK) \ +#define SQ_CTXT_PREF_SET(val, member) \ + (((val) & SQ_CTXT_PREF_##member##_MASK) \ << SQ_CTXT_PREF_##member##_SHIFT) -#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT) -#define RQ_CTXT_PI_IDX_SHIFT 0 -#define RQ_CTXT_CI_IDX_SHIFT 16 +#define RQ_CTXT_PI_IDX_SHIFT 0 +#define RQ_CTXT_CI_IDX_SHIFT 16 -#define RQ_CTXT_PI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_IDX_MASK 0xFFFFU +#define RQ_CTXT_PI_IDX_MASK 0xFFFFU +#define RQ_CTXT_CI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_PI_SET(val, member) \ +#define RQ_CTXT_CI_PI_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 -#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 +#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 +#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 -#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ - (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ +#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ + (((val) & 
RQ_CTXT_CEQ_ATTR_##member##_MASK) \ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT) -#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 -#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 +#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 +#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 -#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U -#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U +#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define RQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define RQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ << RQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define RQ_CTXT_CQE_LEN_SHIFT 28 +#define RQ_CTXT_CQE_LEN_SHIFT 28 -#define RQ_CTXT_CQE_LEN_MASK 0x3U +#define RQ_CTXT_CQE_LEN_MASK 0x3U -#define RQ_CTXT_CQE_LEN_SET(val, member) \ +#define RQ_CTXT_CQE_LEN_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define RQ_CTXT_PREF_CI_HI_SHIFT 0 -#define RQ_CTXT_PREF_OWNER_SHIFT 4 +#define RQ_CTXT_PREF_CI_HI_SHIFT 0 +#define RQ_CTXT_PREF_OWNER_SHIFT 4 -#define RQ_CTXT_PREF_CI_HI_MASK 0xFU -#define RQ_CTXT_PREF_OWNER_MASK 0x1U +#define RQ_CTXT_PREF_CI_HI_MASK 0xFU +#define RQ_CTXT_PREF_OWNER_MASK 0x1U -#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define RQ_CTXT_PREF_SET(val, member) \ - (((val) & RQ_CTXT_PREF_##member##_MASK) \ +#define RQ_CTXT_PREF_SET(val, member) \ + (((val) & RQ_CTXT_PREF_##member##_MASK) \ << RQ_CTXT_PREF_##member##_SHIFT) -#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ << RQ_CTXT_WQ_BLOCK_##member##_SHIFT) #define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4) -#define WQ_PAGE_PFN_SHIFT 12 -#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_PAGE_PFN_SHIFT 12 +#define WQ_BLOCK_PFN_SHIFT 9 #define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT) #define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -/** - * Prepare the command queue header and converted it to big-endian format. - * - * @param[out] qp_ctxt_hdr - * Pointer to command queue context header structure to be initialized. - * @param[in] ctxt_type - * Type of context (SQ/RQ) to be set in header. - * @param[in] num_queues - * Number of queues. - * @param[in] q_id - * Starting queue ID for this context. 
- */ -static void -hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, - enum hinic3_qp_ctxt_type ctxt_type, - uint16_t num_queues, uint16_t q_id) -{ - qp_ctxt_hdr->queue_type = ctxt_type; - qp_ctxt_hdr->num_queues = num_queues; - qp_ctxt_hdr->start_qid = q_id; - qp_ctxt_hdr->rsvd = 0; - - rte_atomic_thread_fence(rte_memory_order_seq_cst); +#define CQE_CTX_CI_ADDR_SHIFT 4 - hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); -} - -/** - * Initialize context structure for specified TXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] sq - * Pointer to TXQ structure. - * @param[in] sq_id - * ID of TXQ being configured. - * @param[out] sq_ctxt - * Pointer to structure that will hold TXQ context. - */ -static void +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, struct hinic3_sq_ctxt *sq_ctxt) { @@ -386,22 +270,13 @@ hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt)); } -/** - * Initialize context structure for specified RXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] rq - * Pointer to RXQ structure. - * @param[out] rq_ctxt - * Pointer to structure that will hold RXQ context. - */ -static void +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) { uint64_t wq_page_addr, wq_page_pfn, wq_block_pfn; uint32_t wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo; uint16_t pi_start, ci_start; - uint16_t wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT; + uint16_t wqe_type = rq->wqe_type; uint8_t intr_disable; /* RQ depth is in unit of 8 Bytes. */ @@ -446,6 +321,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE); rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN); break; + case HINIC3_COMPACT_RQ_WQE: + /* Use 8Byte WQE without SGE for CQE. */ + rq_ctxt->wq_pfn_hi_type_owner |= RQ_CTXT_WQ_PAGE_SET(3, WQE_TYPE); + break; default: PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type); } @@ -495,12 +374,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) static int init_sq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL; - struct hinic3_sq_ctxt *sq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_txq *sq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -511,28 +388,14 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_sqs) { - sq_ctxt_block = cmd_buf->buf; - sq_ctxt = sq_ctxt_block->sq_ctxt; - max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX ? 
HINIC3_Q_CTXT_MAX : (nic_dev->num_sqs - q_id); - - hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_SQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - sq = nic_dev->txqs[curr_id]; - hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]); - } - - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set SQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -563,12 +426,10 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) static int init_rq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL; - struct hinic3_rq_ctxt *rq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_rxq *rq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -579,28 +440,14 @@ init_rq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_rqs) { - rq_ctxt_block = cmd_buf->buf; - rq_ctxt = rq_ctxt_block->rq_ctxt; - max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX ? HINIC3_Q_CTXT_MAX : (nic_dev->num_rqs - q_id); - - hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_RQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - rq = nic_dev->rxqs[curr_id]; - hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]); - } - - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_RQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set RQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -633,9 +480,9 @@ static int clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; struct hinic3_cmd_buf *cmd_buf; uint64_t out_param = 0; + uint8_t cmd; int err; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -644,26 +491,11 @@ clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, return -ENOMEM; } - /* Construct related command request. */ - ctxt_block = cmd_buf->buf; - /* Assumed max_rqs must be equal to max_sqs. */ - ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; - ctxt_block->cmdq_hdr.queue_type = ctxt_type; - ctxt_block->cmdq_hdr.start_qid = 0; - /* - * Add a memory barrier to ensure that instructions are not out of order - * due to compilation optimization. - */ - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); - - cmd_buf->size = sizeof(*ctxt_block); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_clean_tso_lro_space(nic_dev, cmd_buf, ctxt_type); /* Send a command to hardware to clean up queue offload context. 
*/ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if ((err) || (out_param)) { PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, err: %d, out_param: %" PRIu64, @@ -705,6 +537,62 @@ hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev) nic_dev->rx_buff_len = buf_size; } +#define HINIC3_RX_CQE_TIMER_LOOP 15 +#define HINIC3_RX_CQE_COALESCE_NUM 63 + +int +hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rxq *rxq = NULL; + struct hinic3_rq_cqe_ctx cqe_ctx = { 0 }; + rte_iova_t rq_ci_paddr; + uint16_t out_size = sizeof(cqe_ctx); + uint16_t q_id = 0; + uint16_t cmd; + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ) + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN; + else + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX; + + while (q_id < nic_dev->num_rqs) { + rxq = nic_dev->rxqs[q_id]; + if (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE) { + rq_ci_paddr = rxq->rq_ci_paddr >> CQE_CTX_CI_ADDR_SHIFT; + cqe_ctx.ci_addr_hi = upper_32_bits(rq_ci_paddr); + cqe_ctx.ci_addr_lo = lower_32_bits(rq_ci_paddr); + cqe_ctx.threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM; + cqe_ctx.timer_loop = HINIC3_RX_CQE_TIMER_LOOP; + } else { + cqe_ctx.threshold_cqe_num = 0; + cqe_ctx.timer_loop = 0; + } + + cqe_ctx.cqe_type = (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE); + cqe_ctx.msix_entry_idx = rxq->msix_entry_idx; + cqe_ctx.rq_id = q_id; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, + &cqe_ctx, sizeof(cqe_ctx), + &cqe_ctx, &out_size); + if (err || !out_size || cqe_ctx.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed, qid: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, err, cqe_ctx.msg_head.status, out_size); + return -EFAULT; + } + q_id++; + } + + return 0; +} + int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) { @@ -768,13 +656,50 @@ hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) } } + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + /* Init Rxq CQE context. 
*/ + err = hinic3_init_rq_cqe_ctxts(nic_dev); + if (err) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed"); + goto set_cqe_ctx_fail; + } + } + return 0; +set_cqe_ctx_fail: set_cons_idx_table_err: hinic3_clean_root_ctxt(hwdev); return err; } +int +hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rq_enable msg; + uint16_t out_size = sizeof(msg); + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + memset(&msg, 0, sizeof(msg)); + msg.rq_enable = enable; + msg.rq_id = q_id; + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN, + &msg, sizeof(msg), &msg, &out_size); + if (err || !out_size || msg.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq enable failed, qid: %u, enable: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, enable, err, msg.msg_head.status, out_size); + return -EFAULT; + } + + return 0; +} + void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev) { diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index c8e690981b..d0acba4cf4 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -28,11 +28,6 @@ #define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) -#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) -#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -231,6 +226,31 @@ hinic3_write_db(void *db_addr, uint16_t q_id, int cos, uint8_t cflag, uint16_t p */ void hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev); +/** + * Initialize RQ integrated CQE context + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev); + +/** + * Set RQ disable or enable + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * @param[in] q_id + * Receive queue id. + * @param[in] enable + * 1: enable 0: disable + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable); + /** * Initialize qps contexts, set SQ ci attributes, arm all SQ. * @@ -268,7 +288,8 @@ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); /** * Get cmdq ops hardware tile NIC(htn) supported. * - * @retval Pointer to ops. + * @return + * Pointer to ops. */ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); @@ -279,9 +300,6 @@ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); * Pointer to ethernet device structure. * @param[out] s_feature * s_feature driver supported. - * - * @return - * 0 on success, non-zero on failure. */ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_feature); @@ -296,4 +314,29 @@ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_fea */ uint64_t hinic3_get_driver_feature(struct hinic3_nic_dev *nic_dev); +/** + * Initialize context structure for specified TXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] sq + * Pointer to TXQ structure. + * @param[in] sq_id + * ID of TXQ being configured. 
+ * @param[out] sq_ctxt + * Pointer to structure that will hold TXQ context. + */ +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, + struct hinic3_sq_ctxt *sq_ctxt); + +/** + * Initialize context structure for specified RXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] rq + * Pointer to RXQ structure. + * @param[out] rq_ctxt + * Pointer to structure that will hold RXQ context. + */ +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt); + #endif /* _HINIC3_NIC_IO_H_ */ diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 1a92df59b7..7ae39e3e91 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -279,6 +279,24 @@ struct __rte_cache_aligned hinic3_rxq { #endif }; +/* Rx CQE info get callback function */ +typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/* Rx CQE check status callback function */ +typedef bool (*nic_rx_cqe_done_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe **rx_cqe); + +/* Rx CQE empty poll callback function */ +typedef int (*nic_rx_poll_rq_empty_t)(struct hinic3_rxq *rxq); + +struct hinic3_nic_rx_ops { + nic_rx_get_cqe_info_t nic_rx_get_cqe_info; + nic_rx_cqe_done_t nic_rx_cqe_done; + nic_rx_poll_rq_empty_t nic_rx_poll_rq_empty; +}; + uint16_t hinic3_rx_fill_wqe(struct hinic3_rxq *rxq); uint16_t hinic3_rx_fill_buffers(struct hinic3_rxq *rxq); diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index d150f7c6a4..21958a00cc 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -304,6 +304,14 @@ struct __rte_cache_aligned hinic3_txq { #endif }; +/* Tx WQE offload set callback function */ +typedef void (*nic_tx_set_wqe_offload_t)(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +struct hinic3_nic_tx_ops { + nic_tx_set_wqe_offload_t nic_tx_set_wqe_offload; +}; + void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev); void hinic3_free_txq_mbufs(struct hinic3_txq *txq); void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c index d997647f48..634dfe7239 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_htn_cmdq.h" +#define HTN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define HTN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_htn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -27,7 +32,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_htn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id, 
uint16_t func_id) { @@ -45,7 +50,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_htn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t func_id; uint16_t i; @@ -65,9 +70,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; } @@ -75,10 +80,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_htn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_htn_vlan_ctx); + vlan_ctx = (struct hinic3_htn_vlan_ctx *)cmd_buf->buf; vlan_ctx->dest_func_id = func_id; vlan_ctx->start_qid = q_id; @@ -87,7 +92,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_htn_vlan_ctx)); return HINIC3_HTN_CMD_SVLAN_MODIFY; } diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 1245b9c8d8..ffafe39fb5 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -7,7 +7,7 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_htn_qp_ctxt_header { uint32_t rsvd[2]; uint16_t num_queues; uint16_t queue_type; @@ -15,12 +15,12 @@ struct hinic3_qp_ctxt_header { uint16_t dest_func_id; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_clean_queue_ctxt { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_qp_ctxt_block { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; @@ -43,7 +43,7 @@ enum hinic3_htn_cmd { HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE }; -struct hinic3_vlan_ctx { +struct hinic3_htn_vlan_ctx { uint32_t rsv[2]; uint16_t vlan_tag; uint8_t vlan_sel; diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index 3d4becf07c..dfe8598f78 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_stn_cmdq.h" +#define STN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define STN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct 
hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_stn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -26,7 +31,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_stn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id) { @@ -44,7 +49,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_stn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t i; qp_ctxt_block = cmd_buf->buf; @@ -62,9 +67,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; } @@ -72,10 +77,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_stn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_stn_vlan_ctx); + vlan_ctx = (struct hinic3_stn_vlan_ctx *)cmd_buf->buf; vlan_ctx->func_id = func_id; vlan_ctx->qid = q_id; @@ -84,7 +89,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_stn_vlan_ctx)); return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index f8d26e9397..a40c4faa89 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -7,27 +7,27 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_stn_qp_ctxt_header { uint16_t num_queues; uint16_t queue_type; uint16_t start_qid; uint16_t rsvd; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_clean_queue_ctxt { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; uint32_t rsvd; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_qp_ctxt_block { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; }; }; -struct hinic3_vlan_ctx { +struct hinic3_stn_vlan_ctx { uint32_t func_id; uint32_t qid; /* if qid = 0xFFFF, config for all queues */ uint32_t vlan_id; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
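[Editor's illustrative sketch] The split above boils down to an ops-table dispatch: each NIC family (stn_adapt vs htn_adapt) fills the command buffer with its own header layout and hands the matching opcode back to the caller, so init_sq_ctxts()/init_rq_ctxts() no longer hard-code HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX. A minimal, self-contained sketch of that pattern follows; the struct layout, helper names and opcode values here are simplified stand-ins for illustration, not the driver's real definitions.

/*
 * Sketch only: mirrors the cmdq ops dispatch introduced by the patches above.
 * All types, sizes and opcode values below are placeholders.
 */
#include <stdint.h>
#include <stdio.h>

struct cmd_buf { uint16_t size; void *buf; };

/* Each NIC family fills the buffer its own way and returns the opcode to use. */
struct nic_cmdq_ops {
	uint8_t (*prepare_qp_ctxt)(struct cmd_buf *buf, uint16_t start_qid, uint16_t n);
};

static uint8_t stn_prepare(struct cmd_buf *buf, uint16_t start_qid, uint16_t n)
{
	(void)start_qid;
	buf->size = (uint16_t)(8 + n * 64);	/* small STN header + contexts */
	return 0x10;				/* stand-in for MODIFY_QUEUE_CTX */
}

static uint8_t htn_prepare(struct cmd_buf *buf, uint16_t start_qid, uint16_t n)
{
	(void)start_qid;
	buf->size = (uint16_t)(16 + n * 64);	/* HTN header is larger */
	return 0x20;				/* stand-in for SQ_RQ_CONTEXT_MULTI_ST */
}

static const struct nic_cmdq_ops stn_ops = { .prepare_qp_ctxt = stn_prepare };
static const struct nic_cmdq_ops htn_ops = { .prepare_qp_ctxt = htn_prepare };

int main(void)
{
	int is_htn = 1;				/* would come from a feature bit */
	const struct nic_cmdq_ops *ops = is_htn ? &htn_ops : &stn_ops;
	struct cmd_buf buf = { 0, NULL };

	/* The caller no longer hard-codes the opcode or the buffer layout. */
	uint8_t cmd = ops->prepare_qp_ctxt(&buf, 0, 4);
	printf("cmd=0x%x size=%u\n", (unsigned)cmd, (unsigned)buf.size);
	return 0;
}

In the driver itself the equivalent selection is driven by the NIC_F_HTN_CMDQ feature bit, and the ops tables are obtained through hinic3_cmdq_get_stn_ops()/hinic3_cmdq_get_htn_ops().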
* [V6 5/7] net/hinic3: add rx ops to support Compact CQE 2026-03-19 13:52 ` [v6 " Feifei Wang ` (3 preceding siblings ...) 2026-03-19 13:52 ` [V6 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang @ 2026-03-19 13:52 ` Feifei Wang 2026-03-19 13:52 ` [V6 6/7] net/hinic3: add tx " Feifei Wang ` (3 subsequent siblings) 8 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-19 13:52 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt receive path, use different func callback to separate normal CQE process and Compact CQE process. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.h | 3 +- drivers/net/hinic3/hinic3_rx.c | 242 ++++++++++++++++++++++------- drivers/net/hinic3/hinic3_rx.h | 164 ++++++++++++++++++- 3 files changed, 343 insertions(+), 66 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 3898edd076..9061e2b217 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -121,8 +121,7 @@ struct hinic3_nic_dev { uint16_t mtu_size; uint16_t rss_state; - uint8_t num_rss; /**< Number of RSS queues. */ - uint8_t rsvd0; /**< Reserved field 0. */ + uint16_t num_rss; /**< Number of RSS queues. */ uint32_t rx_mode; uint8_t rx_queue_list[HINIC3_MAX_QUEUE_NUM]; diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 3d5f4e4524..9e2c80f759 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -219,11 +219,11 @@ hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq) while (free_wqebbs++ < rxq->q_depth) { ci = hinic3_get_rq_local_ci(rxq); - - rx_cqe = &rxq->rx_cqe[ci]; - - /* Clear done bit. */ - rx_cqe->status = 0; + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + rx_cqe = &rxq->rx_cqe[ci]; + /* Clear done bit. */ + rx_cqe->status = 0; + } rx_info = &rxq->rx_info[ci]; rte_pktmbuf_free(rx_info->mbuf); @@ -299,7 +299,7 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) for (i = 0; i < rearm_wqebbs; i++) { dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]); - /* Fill buffer address only. */ + /* Fill packet dma address into wqe. 
*/ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { rq_wqe->extend_wqe.buf_desc.sge.hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); @@ -307,11 +307,16 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) hinic3_hw_be32(lower_32_bits(dma_addr)); rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; - } else { + } else if (rxq->wqe_type == HINIC3_NORMAL_RQ_WQE) { rq_wqe->normal_wqe.buf_hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); rq_wqe->normal_wqe.buf_lo_addr = hinic3_hw_be32(lower_32_bits(dma_addr)); + } else { + rq_wqe->compact_wqe.buf_hi_addr = + hinic3_hw_be32(upper_32_bits(dma_addr)); + rq_wqe->compact_wqe.buf_lo_addr = + hinic3_hw_be32(lower_32_bits(dma_addr)); } rq_wqe = @@ -355,7 +360,7 @@ hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev, void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1)); @@ -372,7 +377,7 @@ hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev) static void hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, uint32_t *indir_tbl) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; int i = 0; int j; @@ -522,7 +527,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { uint8_t queue_pos; - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list, rss_queue_count, queue_id); @@ -534,8 +539,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, rss_queue_count--; memmove(nic_dev->rx_queue_list + queue_pos, nic_dev->rx_queue_list + queue_pos + 1, - (rss_queue_count - queue_pos) * - sizeof(nic_dev->rx_queue_list[0])); + (rss_queue_count - queue_pos) * sizeof(nic_dev->rx_queue_list[0])); } RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list)); @@ -618,6 +622,33 @@ hinic3_poll_rq_empty(struct hinic3_rxq *rxq) return err; } +int +hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq) +{ + struct hinic3_rx_info *rx_info; + struct hinic3_rq_ci_wb rq_ci; + uint16_t sw_ci; + uint16_t hw_ci; + uint32_t val; + + sw_ci = hinic3_get_rq_local_ci(rxq); + val = rte_read32(&rxq->rq_ci->dw1.value); + rq_ci.dw1.value = hinic3_hw_cpu32(val); + hw_ci = rq_ci.dw1.bs.hw_ci; + + while (sw_ci != hw_ci) { + rx_info = &rxq->rx_info[sw_ci]; + rte_pktmbuf_free(rx_info->mbuf); + rx_info->mbuf = NULL; + + sw_ci++; + sw_ci &= rxq->q_mask; + hinic3_update_rq_local_ci(rxq, 1); + } + + return 0; +} + void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, uint32_t *cqe_hole_cnt, uint32_t *head_ci, uint32_t *head_done) @@ -701,14 +732,17 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) rte_spinlock_unlock(&nic_dev->queue_list_lock); /* Send flush rxq cmd to device. 
*/ - err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + else + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, false); if (err) { PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto rq_flush_failed; } - err = hinic3_poll_rq_empty(rxq); + err = nic_dev->rx_ops->nic_rx_poll_rq_empty(rxq); if (err) { hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt, &head_ci, &head_done); @@ -724,6 +758,7 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return 0; poll_rq_failed: + hinic3_set_rq_enable(nic_dev, rxq->q_id, true); rq_flush_failed: rte_spinlock_lock(&nic_dev->queue_list_lock); set_indir_failed: @@ -746,14 +781,22 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id); if (nic_dev->rss_state == HINIC3_RSS_ENABLE) { - err = hinic3_refill_indir_rqid(rxq); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_FDIR) != 0) + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, true); if (err) { - PMD_DRV_LOG(ERR, - "Refill rq to indirect table failed, eth_dev:%s, queue_idx:%d err:%d", - nic_dev->dev_name, rxq->q_id, err); - hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", + nic_dev->dev_name, rxq->q_id); + } else { + err = hinic3_refill_indir_rqid(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Refill rq to indirect table failed," + "eth_dev:%s, queue_idx:%d err:%d", + nic_dev->dev_name, rxq->q_id, err); + hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + } } } + hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); @@ -772,12 +815,9 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) static inline uint64_t -hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) +hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { - uint16_t vlan_tag; - - vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len); - if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) { + if (!vlan_offload || vlan_tag == 0) { *vlan_tci = 0; return 0; } @@ -788,16 +828,14 @@ hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) } static inline uint64_t -hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) +hinic3_rx_csum(uint16_t csum_err, struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; - uint32_t csum_err; uint64_t flags; if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD))) return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN; - csum_err = HINIC3_GET_RX_CSUM_ERR(status); if (likely(csum_err == 0)) return (HINIC3_PKT_RX_IP_CKSUM_GOOD | HINIC3_PKT_RX_L4_CKSUM_GOOD); @@ -832,11 +870,9 @@ hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) } static inline uint64_t -hinic3_rx_rss_hash(uint32_t offload_type, uint32_t rss_hash_value, uint32_t *rss_hash) +hinic3_rx_rss_hash(uint32_t rss_type, uint32_t rss_hash_value, uint32_t *rss_hash) { - uint32_t rss_type; - rss_type = HINIC3_GET_RSS_TYPES(offload_type); if (likely(rss_type != 0)) { *rss_hash = rss_hash_value; return HINIC3_PKT_RX_RSS_HASH; @@ -931,18 +967,118 @@ hinic3_start_all_rqs(struct rte_eth_dev *eth_dev) return err; } +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + volatile struct hinic3_rq_cqe *cqe = NULL; + uint16_t 
sw_ci; + uint32_t status; + + sw_ci = hinic3_get_rq_local_ci(rxq); + *rx_cqe = &rxq->rx_cqe[sw_ci]; + cqe = *rx_cqe; + + status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&cqe->status, + rte_memory_order_acquire))); + if (!HINIC3_GET_RX_DONE(status)) + return false; + + return true; +} + +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_ci_wb rq_ci; + struct rte_mbuf *rxm = NULL; + uint16_t sw_ci, hw_ci; + uint32_t val; + + sw_ci = hinic3_get_rq_local_ci(rxq); + val = rte_read32(&rxq->rq_ci->dw1.value); + rq_ci.dw1.value = hinic3_hw_cpu32(val); + hw_ci = rq_ci.dw1.bs.hw_ci; + + if (hw_ci == sw_ci) + return false; + + rxm = rxq->rx_info[sw_ci].mbuf; + + *rx_cqe = (volatile struct hinic3_rq_cqe *)rte_mbuf_data_addr_default(rxm); + + return true; +} + +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0 = hinic3_hw_cpu32(rx_cqe->status); + uint32_t dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); + + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + + cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); + cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); + + cqe_info->ptype = HINIC3_GET_RX_PTYPE_OFFLOAD(dw0); + cqe_info->vlan_offload = RQ_CQE_OFFOLAD_TYPE_GET(dw2, VLAN_EN); + cqe_info->rss_type = RQ_CQE_OFFOLAD_TYPE_GET(dw2, RSS_TYPE); + cqe_info->rss_hash_value = dw3; +} + +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0, dw1, dw2; + + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + dw0 = hinic3_hw_cpu32(rx_cqe->status); + dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + } else { + /* Compact Rx CQE mode integrates cqe with packet in big endian way. */ + dw0 = rte_be_to_cpu_32(rx_cqe->status); + dw1 = rte_be_to_cpu_32(rx_cqe->vlan_len); + dw2 = rte_be_to_cpu_32(rx_cqe->offload_type); + } + + cqe_info->cqe_type = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_TYPE); + cqe_info->csum_err = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->vlan_offload = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, VLAN_EN); + cqe_info->cqe_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_LEN); + cqe_info->pkt_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PKT_LEN); + cqe_info->ts_flag = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, TS_FLAG); + cqe_info->ptype = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PTYPE); + cqe_info->rss_hash_value = dw1; + + if (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) { + cqe_info->lro_num = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, NUM_LRO); + cqe_info->vlan_tag = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, VLAN); + } + + if (cqe_info->cqe_type == HINIC3_RQ_CQE_INTEGRATE) + cqe_info->data_offset = + (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) ? 
16 : 8; +} + #define HINIC3_RX_EMPTY_THRESHOLD 3 uint16_t hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct hinic3_rxq *rxq = rx_queue; + struct hinic3_nic_dev *nic_dev = rxq->nic_dev; struct hinic3_rx_info *rx_info = NULL; volatile struct hinic3_rq_cqe *rx_cqe = NULL; + struct hinic3_cqe_info cqe_info = {0}; struct rte_mbuf *rxm = NULL; - uint16_t sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0; - uint32_t status, pkt_len, vlan_len, offload_type, lro_num; + uint16_t sw_ci, rx_buf_len, pkts = 0; + uint32_t pkt_len; uint64_t rx_bytes = 0; - uint32_t hash_value; #ifdef HINIC3_XSTAT_PROF_RX uint64_t t1 = rte_get_tsc_cycles(); @@ -953,20 +1089,22 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) goto out; sw_ci = hinic3_get_rq_local_ci(rxq); - rx_buf_len = rxq->buf_len; while (pkts < nb_pkts) { rx_cqe = &rxq->rx_cqe[sw_ci]; - status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&rx_cqe->status, - rte_memory_order_acquire))); - if (!HINIC3_GET_RX_DONE(status)) { + if (!nic_dev->rx_ops->nic_rx_cqe_done(rxq, &rx_cqe)) { rxq->rxq_stats.empty++; break; } - vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len); + nic_dev->rx_ops->nic_rx_get_cqe_info(rxq, rx_cqe, &cqe_info); - pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len); + pkt_len = cqe_info.pkt_len; + /* + * Compact Rx CQE mode integrates cqe with packet, + * so mbuf length needs to remove the length of cqe. + */ + rx_buf_len = rxq->buf_len - cqe_info.data_offset; rx_info = &rxq->rx_info[sw_ci]; rxm = rx_info->mbuf; @@ -982,7 +1120,7 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (likely(pkt_len <= rx_buf_len)) { rxm->data_len = (uint16_t)pkt_len; rxm->pkt_len = pkt_len; - wqebb_cnt++; + hinic3_update_rq_local_ci(rxq, 1); } else { rxm->data_len = rx_buf_len; rxm->pkt_len = rx_buf_len; @@ -991,33 +1129,28 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * If receive jumbo, updating ci will be done by * hinic3_recv_jumbo_pkt function. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1); - wqebb_cnt = 0; + hinic3_update_rq_local_ci(rxq, 1); hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len); sw_ci = hinic3_get_rq_local_ci(rxq); } - rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_off = RTE_PKTMBUF_HEADROOM + cqe_info.data_offset; rxm->port = rxq->port_id; /* 4. Rx checksum offload. */ - rxm->ol_flags |= hinic3_rx_csum(status, rxq); + rxm->ol_flags |= hinic3_rx_csum(cqe_info.csum_err, rxq); /* 5. Vlan offload. */ - offload_type = hinic3_hw_cpu32(rx_cqe->offload_type); - - rxm->ol_flags |= - hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci); + rxm->ol_flags |= hinic3_rx_vlan(cqe_info.vlan_offload, cqe_info.vlan_tag, + &rxm->vlan_tci); /* 6. RSS. */ - hash_value = hinic3_hw_cpu32(rx_cqe->hash_val); - rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value, + rxm->ol_flags |= hinic3_rx_rss_hash(cqe_info.rss_type, cqe_info.rss_hash_value, &rxm->hash.rss); /* 8. LRO. */ - lro_num = HINIC3_GET_RX_NUM_LRO(status); - if (unlikely(lro_num != 0)) { + if (unlikely(cqe_info.lro_num != 0)) { rxm->ol_flags |= HINIC3_PKT_RX_LRO; - rxm->tso_segsz = pkt_len / lro_num; + rxm->tso_segsz = pkt_len / cqe_info.lro_num; } rx_cqe->status = 0; @@ -1027,9 +1160,6 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } if (pkts) { - /* 9. Update local ci. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt); - /* Update packet stats. 
*/ rxq->rxq_stats.packets += pkts; rxq->rxq_stats.bytes += rx_bytes; diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 7ae39e3e91..129c2b4a59 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -5,15 +5,13 @@ #ifndef _HINIC3_RX_H_ #define _HINIC3_RX_H_ -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0 -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19 -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_SHIFT 0 +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_MASK 0xFFFU +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU #define DPI_EXT_ACTION_FILED (1ULL << 32) @@ -21,6 +19,9 @@ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \ RQ_CQE_OFFOLAD_TYPE_##member##_MASK) +#define HINIC3_GET_RX_PTYPE_OFFLOAD(offload_type) \ + RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PTYPE_OFFLOAD) + #define HINIC3_GET_RX_PKT_TYPE(offload_type) \ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE) @@ -122,6 +123,54 @@ #define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD) +/* Compact CQE Field */ +/* cqe dw0 */ +#define RQ_COMPACT_CQE_STATUS_RXDONE_SHIFT 31 +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_SHIFT 30 +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_SHIFT 29 +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_SHIFT 28 +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_SHIFT 25 +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_SHIFT 24 +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_SHIFT 23 +#define RQ_COMPACT_CQE_STATUS_PKT_MC_SHIFT 21 +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_SHIFT 19 +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PTYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_SHIFT 0 + +#define RQ_COMPACT_CQE_STATUS_RXDONE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_MC_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_MASK 0x3U +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_PTYPE_MASK 0xFFFU +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_STATUS_GET(val, member) \ + ((((val) >> RQ_COMPACT_CQE_STATUS_##member##_SHIFT) & \ + RQ_COMPACT_CQE_STATUS_##member##_MASK)) + +#define HINIC3_RQ_CQE_SEPARATE 0 +#define HINIC3_RQ_CQE_INTEGRATE 1 + +/* cqe dw2 */ +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_SHIFT 24 +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_SHIFT 8 + +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_MASK 0xFFU +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(val, member) \ + (((val) >> RQ_COMPACT_CQE_OFFLOAD_##member##_SHIFT) & \ + RQ_COMPACT_CQE_OFFLOAD_##member##_MASK) + +#define HINIC3_RQ_COMPACT_CQE_16BYTE 0 +#define HINIC3_RQ_COMPACT_CQE_8BYTE 1 /* Rx cqe checksum err */ #define HINIC3_RX_CSUM_IP_CSUM_ERR RTE_BIT32(0) #define HINIC3_RX_CSUM_TCP_CSUM_ERR RTE_BIT32(1) @@ -195,6 +244,25 @@ struct __rte_cache_aligned hinic3_rq_cqe 
{ uint32_t pkt_info; }; +struct hinic3_cqe_info { + uint8_t data_offset; + uint8_t lro_num; + uint8_t vlan_offload; + uint8_t cqe_len; + + uint8_t cqe_type; + uint8_t ts_flag; + uint16_t csum_err; + + uint16_t vlan_tag; + uint16_t ptype; + + uint16_t pkt_len; + uint16_t rss_type; + + uint32_t rss_hash_value; +}; + /** * Attention: please do not add any member in hinic3_rx_info * because rxq bulk rearm mode will write mbuf in rx_info. @@ -220,13 +288,32 @@ struct hinic3_rq_normal_wqe { uint32_t cqe_lo_addr; }; +struct hinic3_rq_compact_wqe { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + struct hinic3_rq_wqe { union { + struct hinic3_rq_compact_wqe compact_wqe; struct hinic3_rq_normal_wqe normal_wqe; struct hinic3_rq_extend_wqe extend_wqe; }; }; +struct hinic3_rq_ci_wb { + union { + struct { + uint16_t cqe_num; + uint16_t hw_ci; + } bs; + RTE_ATOMIC(uint32_t) value; + } dw1; + + uint32_t rsvd[3]; +}; + + struct __rte_cache_aligned hinic3_rxq { struct hinic3_nic_dev *nic_dev; @@ -263,6 +350,10 @@ struct __rte_cache_aligned hinic3_rxq { struct hinic3_rq_cqe *rx_cqe; struct rte_mempool *mb_pool; + const struct rte_memzone *ci_mz; + struct hinic3_rq_ci_wb *rq_ci; + rte_iova_t rq_ci_paddr; + const struct rte_memzone *cqe_mz; rte_iova_t cqe_start_paddr; void *cqe_start_vaddr; @@ -308,6 +399,7 @@ void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev); int hinic3_update_rss_config(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq); int hinic3_poll_rq_empty(struct hinic3_rxq *rxq); void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, @@ -369,4 +461,60 @@ hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, uint16_t wqe_cnt) rxq->delta += wqe_cnt; } +/** + * Get receive cqe information + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * Receive cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Get receive compact cqe information + * + * @param[in] rx_queue + * Receive queue + * @param[in] rx_cqe + * Receive compact cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Check whether pkt is received when CQE is separated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/** + * Check whether pkt is received when CQE is integrated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + #endif /* _HINIC3_RX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
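[Editor's illustrative sketch] The rx ops split in this patch comes down to two ways of deciding that a packet is ready: the separate-CQE path polls a done bit in a dedicated CQE ring, while the compact/integrated path compares the software CI with a hardware CI write-back and then parses the CQE from the head of the packet buffer, shifting the payload by 8 or 16 bytes. A minimal sketch of the two checks, using simplified stand-in types rather than the driver's real rxq/CQE structures:

/*
 * Sketch only: the two "CQE done" strategies added above, with simplified
 * placeholder types.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cqe      { uint32_t status; };		/* separate-mode CQE ring entry */
struct rq_ci_wb { volatile uint16_t hw_ci; };	/* HW consumer-index write-back */

struct rxq {
	uint16_t sw_ci, q_mask;
	struct cqe *cqe_ring;		/* used by the separate-CQE path */
	struct rq_ci_wb *ci_wb;		/* used by the compact/integrated path */
};

/* Separate mode: poll a done bit in the dedicated CQE ring. */
static bool cqe_done_separate(struct rxq *q)
{
	return (q->cqe_ring[q->sw_ci & q->q_mask].status >> 31) & 0x1;
}

/*
 * Compact mode: hardware writes its CI back to memory; a packet is ready
 * whenever the software CI lags behind it.  The CQE itself sits at the head
 * of the packet buffer, so the payload starts 8 or 16 bytes further in.
 */
static bool cqe_done_integrated(struct rxq *q)
{
	return (q->ci_wb->hw_ci & q->q_mask) != (q->sw_ci & q->q_mask);
}

int main(void)
{
	struct cqe ring[4] = { { 1u << 31 }, { 0 }, { 0 }, { 0 } };
	struct rq_ci_wb wb = { .hw_ci = 1 };
	struct rxq q = { .sw_ci = 0, .q_mask = 3, .cqe_ring = ring, .ci_wb = &wb };

	printf("separate done=%d integrated done=%d\n",
	       cqe_done_separate(&q), cqe_done_integrated(&q));
	return 0;
}

The real driver wires hinic3_rx_separate_cqe_done()/hinic3_rx_integrated_cqe_done() and the matching CQE-info parsers into nic_dev->rx_ops, so hinic3_recv_pkts() stays common to both modes.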
* [V6 6/7] net/hinic3: add tx ops to support Compact CQE 2026-03-19 13:52 ` [v6 " Feifei Wang ` (4 preceding siblings ...) 2026-03-19 13:52 ` [V6 5/7] net/hinic3: add rx " Feifei Wang @ 2026-03-19 13:52 ` Feifei Wang 2026-03-19 13:52 ` [V6 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang ` (2 subsequent siblings) 8 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-19 13:52 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt send path, use different func callback to configure compact wqe and normal wqe offload. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_tx.c | 452 +++++++++++++++++---------------- drivers/net/hinic3/hinic3_tx.h | 146 +++++++++-- 2 files changed, 360 insertions(+), 238 deletions(-) diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index c896fcc76b..e0ff095c04 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -21,6 +21,7 @@ #define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET 1 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0 +#define MAX_TSO_NUM_FRAG 1024 #define HINIC3_TX_OFFLOAD_MASK \ (HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT) @@ -28,7 +29,8 @@ #define HINIC3_TX_CKSUM_OFFLOAD_MASK \ (HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \ HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \ - HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG) + HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_OUTER_UDP_CKSUM | \ + HINIC3_PKT_TX_TCP_SEG) static inline uint16_t hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq) @@ -56,26 +58,23 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq) } static void * -hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) +hinic3_sq_get_wqebbs(struct hinic3_txq *sq, uint16_t num_wqebbs, uint16_t *prod_idx) { - uint16_t cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx); - uint32_t end_pi; + *prod_idx = MASKED_QUEUE_IDX(sq, sq->prod_idx); + sq->prod_idx += num_wqebbs; - end_pi = cur_pi + wqe_info->wqebb_cnt; - sq->prod_idx += wqe_info->wqebb_cnt; + return NIC_WQE_ADDR(sq, *prod_idx); +} - wqe_info->owner = (uint8_t)(sq->owner); - wqe_info->pi = cur_pi; - wqe_info->wrapped = 0; +static inline uint16_t +hinic3_get_and_update_sq_owner(struct hinic3_txq *sq, uint16_t curr_pi, uint16_t wqebb_cnt) +{ + uint16_t owner = sq->owner; - if (unlikely(end_pi >= sq->q_depth)) { + if (unlikely(curr_pi + wqebb_cnt >= sq->q_depth)) sq->owner = !sq->owner; - if (likely(end_pi > sq->q_depth)) - wqe_info->wrapped = (uint8_t)(sq->q_depth - cur_pi); - } - - return NIC_WQE_ADDR(sq, cur_pi); + return owner; } static inline void @@ -90,61 +89,39 @@ hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) /** * Sets the WQE combination information in the transmit queue (SQ). * - * @param[in] txq + * @param[in] sq * Point to send queue. * @param[out] wqe_combo * Point to wqe_combo of send queue(SQ). - * @param[in] wqe - * Point to wqe of send queue(SQ). * @param[in] wqe_info * Point to wqe_info of send queue(SQ). 
*/ static void -hinic3_set_wqe_combo(struct hinic3_txq *txq, +hinic3_set_wqe_combo(struct hinic3_txq *sq, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_sq_wqe *wqe, struct hinic3_wqe_info *wqe_info) { - wqe_combo->hdr = &wqe->compact_wqe.wqe_desc; - - if (wqe_info->offload) { - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->task = (struct hinic3_sq_task *) - (void *)txq->sq_head_addr; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr + txq->wqebb_size); - } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = wqe->extend_wqe.buf_desc; - } + uint16_t tmp_pi; - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->hdr = hinic3_sq_get_wqebbs(sq, 1, &wqe_info->pi); + if (wqe_info->wqebb_cnt == 1) { + /* compact wqe */ + wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_4BYTES; + wqe_combo->task = (struct hinic3_sq_task *)&wqe_combo->hdr->queue_info; + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, 1); return; } - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->bds_head = - (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task); - } + /* extend normal wqe */ + wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->task = hinic3_sq_get_wqebbs(sq, 1, &tmp_pi); + if (wqe_info->sge_cnt > 1) + wqe_combo->bds_head = hinic3_sq_get_wqebbs(sq, wqe_info->sge_cnt - 1, &tmp_pi); - if (wqe_info->wqebb_cnt > 1) { - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS; - - /* This section used as vlan insert, needs to clear. */ - wqe_combo->bds_head->rsvd = 0; - } else { - wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; - } + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, wqe_info->wqebb_cnt); } int @@ -311,6 +288,8 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) /** * Prepare the data packet to be sent and calculate the internal L3 offset. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to be processed. * @param[out] inner_l3_offset @@ -319,14 +298,20 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) * 0 as success, -EINVAL as failure. */ static int -hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) +hinic3_tx_offload_pkt_prepare(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + uint16_t *inner_l3_offset) { uint64_t ol_flags = mbuf->ol_flags; - /* Only support vxlan offload. 
*/ - if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) && - (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN))) - return -EINVAL; + if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) { + if (!(((ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN) && + HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_GENEVE) && + HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_IPIP) && + HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)))) + return -EINVAL; + } #ifdef RTE_LIBRTE_ETHDEV_DEBUG if (rte_validate_tx_offload(mbuf) != 0) @@ -358,107 +343,121 @@ hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) return 0; } -static inline void -hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, uint16_t vlan_tag, - uint8_t vlan_type) +void +hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) +{ + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l4_en, INNER_L4_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l3_en, INNER_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->encapsulation, TUNNEL_FLAG); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l3_en, OUT_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l4_en, OUT_L4_EN); + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); + + if (wqe_combo->task_type == SQ_WQE_TASKSECT_16BYTES) { + task->ip_identify = 0; + task->pkt_info2 = 0; + task->vlan_offload = 0; + task->vlan_offload = SQ_TASK_INFO3_SET(offload_info->vlan_tag, VLAN_TAG) | + SQ_TASK_INFO3_SET(offload_info->vlan_sel, VLAN_TYPE) | + SQ_TASK_INFO3_SET(offload_info->vlan_valid, VLAN_TAG_VALID); + task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + } +} + +void +hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) | - SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) | - SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID); + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + wqe_combo->task->pkt_info0 = + SQ_TASK_INFO_SET(offload_info->out_l3_en, OUT_L3_EN) | + SQ_TASK_INFO_SET(offload_info->out_l4_en, OUT_L4_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l3_en, INNER_L3_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l4_en, INNER_L4_EN) | + SQ_TASK_INFO_SET(offload_info->vlan_valid, VLAN_VALID) | + SQ_TASK_INFO_SET(offload_info->vlan_sel, VLAN_SEL) | + SQ_TASK_INFO_SET(offload_info->vlan_tag, VLAN_TAG); + + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); } -/** - * Set the corresponding offload information based on ol_flags of the mbuf. - * - * @param[in] mbuf - * Point to the mbuf for which offload needs to be set in the sending queue. - * @param[out] task - * Point to task of send queue(SQ). - * @param[out] wqe_info - * Point to wqe_info of send queue(SQ). - * @return - * 0 as success, -EINVAL as failure. 
- */ static int -hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task, - struct hinic3_wqe_info *wqe_info) +hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, + struct rte_mbuf *mbuf, + struct hinic3_sq_wqe_combo *wqe_combo, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t pld_offset = 0; - uint32_t queue_info = 0; - uint16_t vlan_tag; - - task->pkt_info0 = 0; - task->ip_identify = 0; - task->pkt_info2 = 0; - task->vlan_offload = 0; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; /* Vlan offload. */ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) { - vlan_tag = mbuf->vlan_tci; - hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0); - task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + offload_info->vlan_valid = 1; + offload_info->vlan_tag = mbuf->vlan_tci; + offload_info->vlan_sel = HINIC3_TX_TPID0; } - /* Cksum offload. */ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK)) - return 0; + goto set_tx_wqe_offload; /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - pld_offset = wqe_info->payload_offset; - if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET) + wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); - - queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF); - - /* Set MSS value. */ - queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS); + offload_info->inner_l3_en = 1; + offload_info->inner_l4_en = 1; + wqe_info->queue_info.tso = 1; + wqe_info->queue_info.mss = mbuf->tso_segsz; } else { if (ol_flags & HINIC3_PKT_TX_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); + offload_info->inner_l3_en = 1; switch (ol_flags & HINIC3_PKT_TX_L4_MASK) { case HINIC3_PKT_TX_TCP_CKSUM: case HINIC3_PKT_TX_UDP_CKSUM: case HINIC3_PKT_TX_SCTP_CKSUM: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); + offload_info->inner_l4_en = 1; break; - case HINIC3_PKT_TX_L4_NO_CKSUM: break; - default: PMD_DRV_LOG(INFO, "not support pkt type"); return -EINVAL; } } - /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) { case HINIC3_PKT_TX_TUNNEL_VXLAN: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG); + case HINIC3_PKT_TX_TUNNEL_VXLAN_GPE: + case HINIC3_PKT_TX_TUNNEL_GENEVE: + offload_info->encapsulation = 1; + wqe_info->queue_info.udp_dp_en = 1; break; - case 0: break; default: - /* For non UDP/GRE tunneling, drop the tunnel packet. 
*/ PMD_DRV_LOG(INFO, "not support tunnel pkt type"); return -EINVAL; } if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN); + offload_info->out_l3_en = 1; - task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); - task->pkt_info2 = hinic3_hw_be32(task->pkt_info2); - wqe_info->queue_info = queue_info; + if (ol_flags & HINIC3_PKT_TX_OUTER_UDP_CKSUM) + offload_info->out_l4_en = 1; + +set_tx_wqe_offload: + nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -477,7 +476,9 @@ static bool hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) { uint32_t total_len, limit_len, checked_len, left_len, adjust_mss; - uint32_t i, max_sges, left_sges, first_len; + uint32_t max_sges, left_sges, first_len; + uint32_t payload_len, frag_num; + uint32_t i; struct rte_mbuf *mbuf_head, *mbuf_first; struct rte_mbuf *mbuf_pre = mbuf; @@ -485,6 +486,17 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) mbuf_head = mbuf; mbuf_first = mbuf; + /* Calculate the number of message payload frag, + * if it exceeds the hardware limit of 10 bits, + * packet will be discarded. + */ + payload_len = mbuf_head->pkt_len - wqe_info->payload_offset; + frag_num = (payload_len + mbuf_head->tso_segsz - 1) / mbuf_head->tso_segsz; + if (frag_num > MAX_TSO_NUM_FRAG) { + PMD_DRV_LOG(WARNING, "tso frag num over hw limit, frag_num:0x%x.", frag_num); + return false; + } + /* Tso sge number validation. */ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) { checked_len = 0; @@ -544,9 +556,48 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) return true; } +static int +hinic3_non_tso_pkt_pre_process(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +{ + struct rte_mbuf *mbuf_pkt = mbuf; + uint32_t total_len = 0; + uint16_t i; + + if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) + return 0; + + /* Non-tso packet length must less than 64KB. */ + if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) + return -EINVAL; + + /* + * Mbuf number of non-tso packet must less than the sge number + * that nic can support. The excess part will be copied to another + * mbuf. + */ + for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { + total_len += mbuf_pkt->data_len; + mbuf_pkt = mbuf_pkt->next; + } + + /* + * Max copy mbuf size is 4KB, packet will be dropped directly, + * if total copy length is more than it. + */ + if ((total_len + HINIC3_COPY_MBUF_SIZE) < mbuf->pkt_len) + return -EINVAL; + + wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; + wqe_info->cpy_mbuf_cnt = 1; + + return 0; +} + /** * Checks and processes transport offload information for data packets. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to send. * @param[in] wqe_info @@ -555,56 +606,29 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) * 0 as success, -EINVAL as failure. */ static int -hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t i, total_len, inner_l3_offset = 0; + uint16_t inner_l3_offset = 0; int err; - struct rte_mbuf *mbuf_pkt = NULL; wqe_info->sge_cnt = mbuf->nb_segs; + wqe_info->cpy_mbuf_cnt = 0; /* Check if the packet set available offload flags. 
*/ if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) { wqe_info->offload = 0; - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } wqe_info->offload = 1; - err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset); + err = hinic3_tx_offload_pkt_prepare(nic_dev, mbuf, &inner_l3_offset); if (err) return err; - /* Non tso mbuf only check sge num. */ + /* Non-tso mbuf only check sge num. */ if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { - if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) - /* Non tso packet len must less than 64KB. */ - return -EINVAL; - - if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) - /* Valid non-tso mbuf. */ - return 0; - - /* - * The number of non-tso packet fragments must be less than 38, - * and mbuf segs greater than 38 must be copied to other - * buffers. - */ - total_len = 0; - mbuf_pkt = mbuf; - for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { - total_len += mbuf_pkt->data_len; - mbuf_pkt = mbuf_pkt->next; - } - - /* Default support copy total 4k mbuf segs. */ - if ((uint32_t)(total_len + (uint16_t)HINIC3_COPY_MBUF_SIZE) < - mbuf->pkt_len) - return -EINVAL; - - wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; - wqe_info->cpy_mbuf_cnt = 1; - - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } /* Tso mbuf. */ @@ -629,6 +653,7 @@ hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); buf_descs->len = hinic3_hw_be32(len); + buf_descs->rsvd = 0; } static inline struct rte_mbuf * @@ -701,7 +726,6 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, { struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head; - uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt; uint16_t real_segs = mbuf->nb_segs; rte_iova_t dma_addr; @@ -736,11 +760,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, * Parts of wqe is in sq bottom while parts * of wqe is in sq head. 
*/ - if (unlikely(wqe_info->wrapped && - (uint64_t)buf_desc == txq->sq_bot_sge_addr)) - buf_desc = (struct hinic3_sq_bufdesc *) - (void *)txq->sq_head_addr; - + if (unlikely((uint64_t)buf_desc == txq->sq_bot_sge_addr)) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); buf_desc++; } @@ -777,10 +798,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, hinic3_hw_be32(lower_32_bits(dma_addr)); wqe_desc->ctrl_len = mbuf->data_len; } else { - if (unlikely(wqe_info->wrapped && - ((uint64_t)buf_desc == txq->sq_bot_sge_addr))) - buf_desc = (struct hinic3_sq_bufdesc *) - txq->sq_head_addr; + if (unlikely(((uint64_t)buf_desc == txq->sq_bot_sge_addr))) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); } @@ -802,44 +821,44 @@ static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, struct hinic3_wqe_info *wqe_info) { + struct hinic3_queue_info *queue_info = &wqe_info->queue_info; struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; + uint32_t *qsf = &wqe_desc->queue_info; - if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) { - wqe_desc->ctrl_len |= - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | - SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | - SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - /* Compact wqe queue_info will transfer to ucode. */ - wqe_desc->queue_info = 0; - - return; - } - - wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | - SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | + wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DIRECT) | SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - wqe_desc->queue_info = wqe_info->queue_info; - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC); - - if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) { - wqe_desc->queue_info |= - SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); - } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) < - TX_MSS_MIN) { - /* Mss should not less than 80. */ - wqe_desc->queue_info = - SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS); - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { + wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | + SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | + SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + + *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); + + if (!SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS)) { + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); + } else if (SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS) < TX_MSS_MIN) { + /* MSS should not less than 80. 
*/ + *qsf = SQ_CTRL_QUEUE_INFO_CLEAR(*qsf, MSS); + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + } + *qsf = hinic3_hw_be32(*qsf); + } else { + wqe_desc->ctrl_len |= SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->udp_dp_en, UDP_DP_EN) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE); } - wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info); + wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); } /** @@ -861,9 +880,7 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) struct hinic3_tx_info *tx_info = NULL; struct rte_mbuf *mbuf_pkt = NULL; struct hinic3_sq_wqe_combo wqe_combo = {0}; - struct hinic3_sq_wqe *sq_wqe = NULL; struct hinic3_wqe_info wqe_info = {0}; - uint32_t offload_err, free_cnt; uint64_t tx_bytes = 0; uint16_t free_wqebb_cnt, nb_tx; @@ -885,16 +902,28 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Tx loop routine. */ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { mbuf_pkt = *tx_pkts++; - if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) { + if (unlikely(hinic3_get_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_info))) { txq->txq_stats.offload_errors++; break; } - if (!wqe_info.offload) - wqe_info.wqebb_cnt = wqe_info.sge_cnt; - else - /* Use extended sq wqe with normal TS. */ - wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1; + wqe_info.wqebb_cnt = wqe_info.sge_cnt; + if (likely(wqe_info.offload || wqe_info.wqebb_cnt > 1)) { + if (txq->tx_wqe_compact_task) { + /** + * One more wqebb is needed for compact task under two situations: + * 1. TSO: MSS field is needed, no available space for + * compact task in compact wqe. + * 2. SGE number > 1: wqe is handlerd as extended wqe by nic. + */ + if (mbuf_pkt->ol_flags & HINIC3_PKT_TX_TCP_SEG || + wqe_info.wqebb_cnt > 1) + wqe_info.wqebb_cnt++; + } else { + /* Use extended sq wqe with normal TS */ + wqe_info.wqebb_cnt++; + } + } free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq); if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) { @@ -907,28 +936,16 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) } } - /* Get sq wqe address from wqe_page. */ - sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info); - if (unlikely(!sq_wqe)) { - txq->txq_stats.tx_busy++; - break; - } - /* Task or bd section maybe wrapped for one wqe. */ - hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info); + hinic3_set_wqe_combo(txq, &wqe_combo, &wqe_info); - wqe_info.queue_info = 0; /* Fill tx packet offload into qsf and task field. */ - if (wqe_info.offload) { - offload_err = hinic3_set_tx_offload(mbuf_pkt, - wqe_combo.task, - &wqe_info); + offload_err = hinic3_set_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_combo, &wqe_info); if (unlikely(offload_err)) { hinic3_put_sq_wqe(txq, &wqe_info); txq->txq_stats.offload_errors++; break; } - } /* Fill sq_wqe buf_desc and bd_desc. */ err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo, @@ -944,7 +961,12 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) tx_info->mbuf = mbuf_pkt; tx_info->wqebb_cnt = wqe_info.wqebb_cnt; - hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); + /* + * For wqe compact type, no need to prepare + * sq ctrl info. 
+ */ + if (wqe_combo.wqe_type != SQ_WQE_COMPACT_TYPE) + hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); tx_bytes += mbuf_pkt->pkt_len; } diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index 21958a00cc..e0ed9908ad 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -6,30 +6,40 @@ #define _HINIC3_TX_H_ #define MAX_SINGLE_SGE_SIZE 65536 -#define HINIC3_NONTSO_PKT_MAX_SGE 38 /**< non-tso max sge 38. */ +#define HINIC3_NONTSO_PKT_MAX_SGE 32 /**< non-tso max sge 32. */ #define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE) #define HINIC3_TSO_PKT_MAX_SGE 127 /**< tso max sge 127. */ #define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE) -/* Tx offload info. */ -struct hinic3_tx_offload_info { - uint8_t outer_l2_len; - uint8_t outer_l3_type; - uint16_t outer_l3_len; - - uint8_t inner_l2_len; - uint8_t inner_l3_type; - uint16_t inner_l3_len; - - uint8_t tunnel_length; - uint8_t tunnel_type; - uint8_t inner_l4_type; - uint8_t inner_l4_len; +/* Tx wqe queue info */ +struct hinic3_queue_info { + uint8_t pri; + uint8_t uc; + uint8_t sctp; + uint8_t udp_dp_en; + uint8_t tso; + uint8_t ufo; + uint8_t payload_offset; + uint8_t pkt_type; + uint16_t mss; + uint16_t rsvd; +}; - uint16_t payload_offset; - uint8_t inner_l4_tcp_udp; - uint8_t rsvd0; /**< Reserved field. */ +/* Tx wqe offload info */ +struct hinic3_offload_info { + uint8_t encapsulation; + uint8_t esp_next_proto; + uint8_t inner_l4_en; + uint8_t inner_l3_en; + uint8_t out_l4_en; + uint8_t out_l3_en; + uint8_t ipsec_offload; + uint8_t pkt_1588; + uint8_t vlan_sel; + uint8_t vlan_valid; + uint16_t vlan_tag; + uint32_t ip_identify; }; /* Tx wqe ctx. */ @@ -42,14 +52,15 @@ struct hinic3_wqe_info { uint8_t rsvd0; /**< Reserved field 0. */ uint16_t payload_offset; - uint8_t wrapped; + uint8_t rsvd1; /**< Reserved field 1. */ uint8_t owner; uint16_t pi; uint16_t wqebb_cnt; - uint16_t rsvd1; /**< Reserved field 1. */ + uint16_t rsvd2; /**< Reserved field 2. */ - uint32_t queue_info; + struct hinic3_queue_info queue_info; + struct hinic3_offload_info offload_info; }; /* Descriptor for the send queue of wqe. */ @@ -103,8 +114,15 @@ struct hinic3_sq_wqe_combo { uint32_t task_type; }; -enum sq_wqe_data_format { +/* Tx queue ctrl info */ +enum sq_wqe_type { SQ_NORMAL_WQE = 0, + SQ_DIRECT_WQE = 1, +}; + +enum sq_wqe_data_format { + SQ_WQE_SGL = 0, + SQ_WQE_INLINE_DATA = 1, }; /* Indicates the type of a WQE. */ @@ -117,7 +135,7 @@ enum sq_wqe_ec_type { /* Indicates the type of tasks with different lengths. 
*/ enum sq_wqe_tasksect_len_type { - SQ_WQE_TASKSECT_46BITS = 0, + SQ_WQE_TASKSECT_4BYTES = 0, SQ_WQE_TASKSECT_16BYTES = 1, }; @@ -177,6 +195,33 @@ enum sq_wqe_tasksect_len_type { ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT))) +/* Compact queue info */ +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_SHIFT 14 +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_SHIFT 16 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_SHIFT 24 +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_SHIFT 25 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_SHIFT 26 +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_SHIFT 27 + +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_MASK 0x3U +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_MASK 0xFFU +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_MASK 0x1U + +#define SQ_CTRL_COMPACT_QUEUE_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_GET(val, member) \ + (((val) >> SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) & \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_CLEAR(val, member) \ + ((val) & (~(SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT))) + /* Setting and obtaining task information */ #define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19 #define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22 @@ -229,6 +274,37 @@ enum sq_wqe_tasksect_len_type { (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \ SQ_TASK_INFO3_##member##_MASK) +/* compact wqe task field */ +#define SQ_TASK_INFO_PKT_1588_SHIFT 31 +#define SQ_TASK_INFO_IPSEC_PROTO_SHIFT 30 +#define SQ_TASK_INFO_OUT_L3_EN_SHIFT 28 +#define SQ_TASK_INFO_OUT_L4_EN_SHIFT 27 +#define SQ_TASK_INFO_INNER_L3_EN_SHIFT 25 +#define SQ_TASK_INFO_INNER_L4_EN_SHIFT 24 +#define SQ_TASK_INFO_ESP_NEXT_PROTO_SHIFT 22 +#define SQ_TASK_INFO_VLAN_VALID_SHIFT 19 +#define SQ_TASK_INFO_VLAN_SEL_SHIFT 16 +#define SQ_TASK_INFO_VLAN_TAG_SHIFT 0 + +#define SQ_TASK_INFO_PKT_1588_MASK 0x1U +#define SQ_TASK_INFO_IPSEC_PROTO_MASK 0x1U +#define SQ_TASK_INFO_OUT_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_OUT_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_ESP_NEXT_PROTO_MASK 0x3U +#define SQ_TASK_INFO_VLAN_VALID_MASK 0x1U +#define SQ_TASK_INFO_VLAN_SEL_MASK 0x7U +#define SQ_TASK_INFO_VLAN_TAG_MASK 0xFFFFU + +#define SQ_TASK_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_TASK_INFO_##member##_MASK) << \ + SQ_TASK_INFO_##member##_SHIFT) + +#define SQ_TASK_INFO_GET(val, member) \ + (((val) >> SQ_TASK_INFO_##member##_SHIFT) & \ + SQ_TASK_INFO_##member##_MASK) + /* Defines the TX queue status. 
*/ enum hinic3_txq_status { HINIC3_TXQ_STATUS_START = 0, @@ -298,6 +374,8 @@ struct __rte_cache_aligned hinic3_txq { uint64_t sq_head_addr; uint64_t sq_bot_sge_addr; uint32_t cos; + uint8_t tx_wqe_compact_task; + uint8_t rsvd[3]; struct hinic3_txq_stats txq_stats; #ifdef HINIC3_XSTAT_PROF_TX uint64_t prof_tx_end_tsc; @@ -319,4 +397,26 @@ uint16_t hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb int hinic3_stop_sq(struct hinic3_txq *txq); int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev); int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt); + +/** + * Set wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +/** + * Set compact wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); #endif /**< _HINIC3_TX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
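[Editor's note, not part of the patch] The WQEBB accounting that the Tx compact-CQE patch above adds to hinic3_xmit_pkts() is easy to lose inside the diff. As a rough reference only, the rule reduces to the C sketch below; the standalone helper and its name are illustrative assumptions and do not exist in the driver:

	#include <stdbool.h>
	#include <stdint.h>

	/*
	 * Illustrative sketch of the WQEBB counting rule from the patch above.
	 * "count_wqebbs" is a hypothetical helper, not driver code.
	 */
	static inline uint16_t
	count_wqebbs(uint16_t sge_cnt, bool offload, bool is_tso,
		     bool tx_wqe_compact_task)
	{
		uint16_t wqebb_cnt = sge_cnt;

		if (offload || wqebb_cnt > 1) {
			if (tx_wqe_compact_task) {
				/*
				 * Compact task section: one extra WQEBB is
				 * needed only when TSO requires the MSS field
				 * or when more than one SGE forces the
				 * extended WQE layout.
				 */
				if (is_tso || wqebb_cnt > 1)
					wqebb_cnt++;
			} else {
				/* Extended SQ WQE with a normal task section. */
				wqebb_cnt++;
			}
		}
		return wqebb_cnt;
	}

In other words, a single-SGE, no-offload packet on compact-task hardware still fits in one WQEBB, while TSO or multi-SGE packets fall back to the extended layout and pay for the separate task section.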
* [V6 7/7] net/hinic3: use different callback func to support htn fdir 2026-03-19 13:52 ` [v6 " Feifei Wang ` (5 preceding siblings ...) 2026-03-19 13:52 ` [V6 6/7] net/hinic3: add tx " Feifei Wang @ 2026-03-19 13:52 ` Feifei Wang 2026-03-21 17:32 ` [v6 0/7] hinic3 change for support new SPx NIC Stephen Hemminger 2026-03-22 16:32 ` Stephen Hemminger 8 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-19 13:52 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, the way flow rules created is different from previous SPx NIC, so use different callback func to split them. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 55 +- drivers/net/hinic3/base/hinic3_nic_cfg.h | 19 +- drivers/net/hinic3/hinic3_ethdev.c | 41 +- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.h | 16 - drivers/net/hinic3/hinic3_rx.c | 26 +- drivers/net/hinic3/hinic3_tx.c | 16 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 8 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 2 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 8 + 12 files changed, 877 insertions(+), 340 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index 22caac0457..5387626b98 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -970,7 +970,7 @@ hinic3_set_vlan_filter(struct hinic3_hwdev *hwdev, uint32_t vlan_filter_ctrl) static int hinic3_set_rx_lro(struct hinic3_hwdev *hwdev, uint8_t ipv4_en, - uint8_t ipv6_en, uint8_t lro_max_pkt_len) + uint8_t ipv6_en, uint8_t lro_max_pkt_len) { struct hinic3_cmd_lro_config lro_cfg = {0}; uint16_t out_size = sizeof(lro_cfg); @@ -1029,7 +1029,7 @@ hinic3_set_rx_lro_timer(struct hinic3_hwdev *hwdev, uint32_t timer_value) } int -hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len) { uint8_t ipv4_en = 0, ipv6_en = 0; @@ -1468,54 +1468,6 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return 0; } -/** - * Set the Ethernet type filtering rule for the FDIR of a NIC. - * - * @param[in] hwdev - * Pointer to hardware device structure. - * @param[in] pkt_type - * Indicate the packet type. - * @param[in] queue_id - * Indicate the queue id. - * @param[in] en - * Indicate whether to add or delete an operation. 1 - add; 0 - delete. - * - * @return - * 0 on success, non-zero on failure. 
- */ -int -hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en) -{ - struct hinic3_set_fdir_ethertype_rule ethertype_cmd; - uint16_t out_size = sizeof(ethertype_cmd); - int err; - - if (!hwdev) - return -EINVAL; - - memset(ðertype_cmd, 0, - sizeof(struct hinic3_set_fdir_ethertype_rule)); - ethertype_cmd.func_id = hinic3_global_func_id(hwdev); - ethertype_cmd.pkt_type = pkt_type; - ethertype_cmd.pkt_type_en = en; - ethertype_cmd.qid = (uint8_t)queue_id; - - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, - HINIC3_NIC_CMD_SET_FDIR_STATUS, - ðertype_cmd, sizeof(ethertype_cmd), - ðertype_cmd, &out_size); - if (err || ethertype_cmd.head.status || !out_size) { - PMD_DRV_LOG(ERR, - "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", - err, ethertype_cmd.head.status, out_size, - ethertype_cmd.func_id); - return -EIO; - } - - return 0; -} - int hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule, uint8_t tcam_rule_type) @@ -1543,8 +1495,7 @@ hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tc &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd, &out_size); if (err || tcam_cmd.msg_head.status || !out_size) { - PMD_DRV_LOG(ERR, - "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", + PMD_DRV_LOG(ERR, "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", err, tcam_cmd.msg_head.status, out_size); return -EIO; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index 06d5bc7d1b..6d3eb433bd 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -1203,7 +1203,7 @@ int hinic3_set_rx_vlan_offload(struct hinic3_hwdev *hwdev, uint8_t en); * @return * 0 on success, non-zero on failure. */ -int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len); /** @@ -1522,8 +1522,21 @@ int hinic3_get_feature_from_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, */ int hinic3_set_feature_to_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, uint16_t size); -int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en); +/** + * Set the Ethernet type filtering rule for the FDIR of a NIC. + * + * @param[in] hwdev + * Pointer to hardware device structure. + * @param[in] pkt_type + * Indicate the packet type. + * @param[in] ethertype_filter + * Pointer to ethertype_filter structure. + * @param[in] en + * Indicate whether to add or delete an operation. 1 - add; 0 - delete. + * + * @return + * 0 on success, non-zero on failure. 
+ */ int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status); diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 1010773ac1..adeae07f27 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -975,8 +975,8 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, "RX queue depth is out of range from %d to %d", HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH); PMD_DRV_LOG(ERR, - "nb_desc: %d, q_depth: %d, port: %d queue: %d", - nb_desc, rq_depth, dev->data->port_id, qid); + "nb_desc: %d, q_depth: %d, port: %d queue: %d", + nb_desc, rq_depth, dev->data->port_id, qid); return -EINVAL; } @@ -2158,8 +2158,7 @@ hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) } /* Update max frame size. */ - HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = - HINIC3_MTU_TO_PKTLEN(mtu); + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = HINIC3_MTU_TO_PKTLEN(mtu); nic_dev->mtu_size = mtu; return err; } @@ -2357,6 +2356,12 @@ hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev) uint32_t rx_mode; int err; + if (!(nic_dev->feature_cap & NIC_F_PROMISC)) { + PMD_DRV_LOG(ERR, "nic_dev: %s, port_id: %d, do not support vf promisc: %" PRIu64 "", + nic_dev->dev_name, dev->data->port_id, nic_dev->feature_cap); + return -ENOTSUP; + } + rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC; err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode); @@ -2527,20 +2532,22 @@ hinic3_rss_hash_update(struct rte_eth_dev *dev, } rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | - RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) - ? 1 - : 0; + RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | - RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) - ? 1 - : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.tcp_ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); if (err) PMD_DRV_LOG(ERR, "Set RSS type failed"); @@ -2597,12 +2604,16 @@ hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) rss_conf->rss_hf |= rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) : 0; - rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0; - rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0; rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0; - + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? 
RTE_ETH_RSS_IPV6_TCP_EX : 0; + } else { + rss_conf->rss_hf |= 0; + rss_conf->rss_hf |= 0; + } return 0; } diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c index 263a281729..37a4f0cf52 100644 --- a/drivers/net/hinic3/hinic3_fdir.c +++ b/drivers/net/hinic3/hinic3_fdir.c @@ -2,15 +2,15 @@ * Copyright(c) 2025 Huawei Technologies Co., Ltd */ +#include "base/hinic3_cmd.h" #include "base/hinic3_compat.h" #include "base/hinic3_hwdev.h" #include "base/hinic3_hwif.h" #include "base/hinic3_nic_cfg.h" #include "hinic3_ethdev.h" +#include "hinic3_nic_io.h" -#define HINIC3_UINT1_MAX 0x1 -#define HINIC3_UINT4_MAX 0xf -#define HINIC3_UINT15_MAX 0x7fff +#define HINIC3_INVALID_INDEX -1 #define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \ (&((struct hinic3_nic_dev *)(nic_dev))->tcam) @@ -77,6 +77,8 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src IPv4. */ tcam_key->key_mask.sipv4_h = @@ -99,15 +101,9 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip); } -static void -hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) +static void hinic3_fdir_ipv6_tcam_key_init_sip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) { - /* Fill type of ip. */ - tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; - tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; - - /* Fill src IPv6. */ tcam_key->key_mask_ipv6.sipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); tcam_key->key_mask_ipv6.sipv6_key1 = @@ -140,8 +136,11 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); tcam_key->key_info_ipv6.sipv6_key7 = HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); +} - /* Fill dst IPv6. */ +static void hinic3_fdir_ipv6_tcam_key_init_dip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ tcam_key->key_mask_ipv6.dipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); tcam_key->key_mask_ipv6.dipv6_key1 = @@ -176,6 +175,26 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); } +static void hinic3_fdir_ipv6_tcam_key_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + hinic3_fdir_ipv6_tcam_key_init_sip(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init_dip(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + /* Fill type of ip. */ + tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); +} + /** * Set the TCAM information in notunnel scenario. 
* @@ -204,6 +223,10 @@ hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev, tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + + tcam_key->key_mask.vlan_flag = 1; + tcam_key->key_info.vlan_flag = 0; + tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -223,6 +246,8 @@ hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src ipv4. */ tcam_key->key_mask.sipv4_h = @@ -252,6 +277,8 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_vxlan_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_vxlan_ipv6.vlan_flag = 0; /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. */ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 = @@ -288,77 +315,6 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]); } -static void -hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) -{ - tcam_key->key_mask_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_mask_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - - tcam_key->key_mask_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key5 = - 
HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_mask_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); -} - static void hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, struct hinic3_fdir_filter *rule, @@ -370,11 +326,14 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto; tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info_ipv6.tunnel_type = rule->tunnel_type; tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; tcam_key->key_info_ipv6.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -386,7 +345,7 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.sport = rule->key_spec.src_port; if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY) - hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); } /** @@ -448,9 +407,11 @@ hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev, HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info.tunnel_type = rule->tunnel_type; + tcam_key->key_mask.vlan_flag = 1; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + tcam_key->key_info.vlan_flag = 0; tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -479,6 +440,259 @@ hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev, tcam_key_calculate(tcam_key, fdir_tcam_rule); } +static void +hinic3_fdir_tcam_key_set_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.dipv4_l = + 
HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_sip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_mask_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_dip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_mask_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + 
tcam_key->key_mask_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void hinic3_fdir_tcam_key_set_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_sip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_notunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.outer_ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6_htn.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.inner_ipv4, + &rule->key_spec.inner_ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_vxlan_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_vxlan_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.inner_ipv6, + &rule->key_spec.inner_ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_tunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = rule->tunnel_type; + + tcam_key->key_mask_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_mask_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_h = + 
HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); + + if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_outer_ipv4_info(rule, tcam_key); + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_inner_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_inner_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_info_htn_init(struct rte_eth_dev *dev, + struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key, + struct hinic3_tcam_cfg_rule *fdir_tcam_rule) +{ + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + + tcam_key->key_mask_htn.function_id_h = HINIC3_UINT5_MAX; + tcam_key->key_mask_htn.function_id_l = HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_l = + hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_h = + (hinic3_global_func_id(nic_dev->hwdev) >> HINIC3_UINT5_WIDTH) & HINIC3_UINT5_MAX; + + tcam_key->key_mask_htn.ip_proto = rule->key_mask.proto; + tcam_key->key_info_htn.ip_proto = rule->key_spec.proto; + + tcam_key->key_mask_htn.sport = rule->key_mask.src_port; + tcam_key->key_info_htn.sport = rule->key_spec.src_port; + + tcam_key->key_mask_htn.dport = rule->key_mask.dst_port; + tcam_key->key_info_htn.dport = rule->key_spec.dst_port; + if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL) + hinic3_fdir_tcam_notunnel_htn_init(rule, tcam_key); + else + hinic3_fdir_tcam_tunnel_htn_init(rule, tcam_key); + + fdir_tcam_rule->data.qid = rule->rq_index; + + tcam_key_calculate(tcam_key, fdir_tcam_rule); +} + /** * Find filter in given ethertype filter list. * @@ -513,19 +727,30 @@ hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_li * Point to the tcam filter list. * @param[in] key * The tcam key to find. + * @param[in] action_type + * The type of action. + * @param[in] tcam_index + * The index of tcam. * @return * If a matching filter is found, the filter is returned, otherwise NULL. */ static inline struct hinic3_tcam_filter * hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list, - struct hinic3_tcam_key *key) + struct hinic3_tcam_key *key, + uint8_t action_type, uint16_t tcam_index) { struct hinic3_tcam_filter *it; - TAILQ_FOREACH(it, filter_list, entries) { - if (memcmp(key, &it->tcam_key, - sizeof(struct hinic3_tcam_key)) == 0) { - return it; + if (action_type == HINIC3_ACTION_ADD) { + TAILQ_FOREACH(it, filter_list, entries) { + if (memcmp(key, &it->tcam_key, sizeof(struct hinic3_tcam_key)) == 0) + return it; + } + } else { + TAILQ_FOREACH(it, filter_list, entries) { + if (tcam_index == + (it->index + HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(it->dynamic_block_id))) + return it; } } @@ -588,25 +813,18 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, * * @param[in] dev * Pointer to ethernet device structure. - * @param[in] fdir_tcam_rule - * Indicate the filtering rule to be searched for. * @param[in] tcam_info * Ternary Content-Addressable Memory (TCAM) information. - * @param[in] tcam_filter - * Point to the TCAM filter. * @param[out] tcam_index * Indicate the TCAM index to be searched for. * @result * Pointer to the TCAM dynamic block. If the search fails, NULL is returned. 
*/ static struct hinic3_tcam_dynamic_block * -hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, - struct hinic3_tcam_cfg_rule *fdir_tcam_rule, +hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev, struct hinic3_tcam_info *tcam_info, - struct hinic3_tcam_filter *tcam_filter, uint16_t *tcam_index) { - struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); uint16_t block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt; struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; struct hinic3_tcam_dynamic_block *tmp = NULL; @@ -616,6 +834,8 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, uint16_t index; int err; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + rule_nums += nic_dev->ethertype_rule_nums; /* * Check whether the number of filtering rules reaches the maximum * capacity of dynamic TCAM blocks. @@ -662,8 +882,7 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, if (tmp == NULL || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "Fdir filter dynamic lookup for index failed!"); + PMD_DRV_LOG(ERR, "Fdir filter dynamic lookup for index failed!"); goto look_up_failed; } @@ -674,20 +893,13 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, /* Find the first free position. */ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "tcam block 0x%x supports filter rules is full!", + PMD_DRV_LOG(ERR, "tcam block 0x%x supports filter rules is full!", tmp->dynamic_block_id); goto look_up_failed; } - tcam_filter->dynamic_block_id = tmp->dynamic_block_id; - tcam_filter->index = index; *tcam_index = index; - fdir_tcam_rule->index = - HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) + - index; - return tmp; look_up_failed: @@ -702,6 +914,107 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, return NULL; } +static void +hinic3_tcam_index_free(struct hinic3_nic_dev *nic_dev, uint16_t index, uint16_t block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + + TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list, entries) { + if (tmp->dynamic_block_id == block_id) + break; + } + + if (tmp == NULL || tmp->dynamic_block_id != block_id) { + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); + return; + } + + tmp->dynamic_index[index] = 0; + tmp->dynamic_index_cnt--; + if (tmp->dynamic_index_cnt == 0) { + hinic3_free_tcam_block(nic_dev->hwdev, &block_id); + hinic3_free_dynamic_block_resource(tcam_info, tmp); + } +} + +static uint16_t +hinic3_tcam_alloc_index(void *dev, uint16_t *block_id) +{ + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev; + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + uint16_t index = 0; + + tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev, tcam_info, &index); + if (tmp == NULL) { + PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); + return HINIC3_TCAM_INVALID_INDEX; + } + + tmp->dynamic_index[index] = 1; + tmp->dynamic_index_cnt++; + + *block_id = tmp->dynamic_block_id; + + return index; +} + +static int +hinic3_set_fdir_ethertype_filter(void *hwdev, uint8_t pkt_type, void *filter, uint8_t en) +{ + struct hinic3_set_fdir_ethertype_rule ethertype_cmd; + struct hinic3_ethertype_filter *ethertype_filter = (struct hinic3_ethertype_filter *)filter; + uint16_t out_size = sizeof(ethertype_cmd); + uint16_t 
block_id; + uint32_t index = 0; + int err; + + if (!hwdev) + return -EINVAL; + struct hinic3_nic_dev *nic_dev = + (struct hinic3_nic_dev *)((struct hinic3_hwdev *)hwdev)->dev_handle; + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en != 0) { + index = hinic3_tcam_alloc_index(nic_dev, &block_id); + if (index == HINIC3_TCAM_INVALID_INDEX) + return -ENOMEM; + index += HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id); + } else { + index = ethertype_filter->tcam_index[pkt_type]; + } + } + + memset(ðertype_cmd, 0, sizeof(struct hinic3_set_fdir_ethertype_rule)); + ethertype_cmd.func_id = hinic3_global_func_id(hwdev); + ethertype_cmd.pkt_type = pkt_type; + ethertype_cmd.pkt_type_en = en; + ethertype_cmd.index = index; + ethertype_cmd.qid = (uint8_t)ethertype_filter->queue; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, + HINIC3_NIC_CMD_SET_FDIR_STATUS, + ðertype_cmd, sizeof(ethertype_cmd), + ðertype_cmd, &out_size); + if (err || ethertype_cmd.head.status || !out_size) { + PMD_DRV_LOG(ERR, + "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", + err, ethertype_cmd.head.status, out_size, ethertype_cmd.func_id); + return -EIO; + } + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en == 0) { + hinic3_tcam_index_free(nic_dev, HINIC3_TCAM_GET_INDEX_IN_BLOCK(index), + HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index)); + } else { + ethertype_filter->tcam_index[pkt_type] = index; + } + } + + return 0; +} + /** * Add a TCAM filter. * @@ -722,11 +1035,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); - struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; - struct hinic3_tcam_dynamic_block *tmp = NULL; struct hinic3_tcam_filter *tcam_filter; - uint16_t tcam_block_index = 0; - uint16_t index = 0; int err; /* Alloc TCAM filter memory. */ @@ -737,39 +1046,14 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, tcam_filter->tcam_key = *tcam_key; tcam_filter->queue = (uint16_t)(fdir_tcam_rule->data.qid); - - /* Add new TCAM rules. */ - if (nic_dev->tcam_rule_nums == 0) { - err = hinic3_alloc_tcam_block(nic_dev->hwdev, &tcam_block_index); - if (err) { - PMD_DRV_LOG(ERR, - "Fdir filter tcam alloc block failed!"); - goto failed; - } - - dynamic_block_ptr = - hinic3_alloc_dynamic_block_resource(tcam_info, - tcam_block_index); - if (dynamic_block_ptr == NULL) { - PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first block memory failed!"); - goto alloc_block_failed; - } - } - - /* - * Look for an available index in the dynamic block to store the new - * TCAM filter. - */ - tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info, - tcam_filter, &index); - if (tmp == NULL) { - PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); - goto lookup_tcam_index_failed; - } + tcam_filter->index = hinic3_tcam_alloc_index(nic_dev, &tcam_filter->dynamic_block_id); + if (tcam_filter->index == HINIC3_TCAM_INVALID_INDEX) + goto failed; + fdir_tcam_rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tcam_filter->dynamic_block_id) + + tcam_filter->index; /* Add a new TCAM rule to the network device. 
*/ - err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, - TCAM_RULE_FDIR_TYPE); + err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, TCAM_RULE_FDIR_TYPE); if (err) { PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!"); goto add_tcam_rules_failed; @@ -785,10 +1069,6 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, /* Add a filter to the end of the queue. */ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries); - /* Update dynamic index. */ - tmp->dynamic_index[index] = 1; - tmp->dynamic_index_cnt++; - nic_dev->tcam_rule_nums++; PMD_DRV_LOG(INFO, @@ -796,7 +1076,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, hinic3_global_func_id(nic_dev->hwdev)); PMD_DRV_LOG(INFO, "tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d", - tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index, + tcam_filter->dynamic_block_id, tcam_filter->index, fdir_tcam_rule->index, fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums); return 0; @@ -806,14 +1086,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, TCAM_RULE_FDIR_TYPE); add_tcam_rules_failed: -lookup_tcam_index_failed: - if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL) - hinic3_free_dynamic_block_resource(tcam_info, - dynamic_block_ptr); - -alloc_block_failed: - if (nic_dev->tcam_rule_nums == 0) - hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tcam_filter->dynamic_block_id); failed: rte_free(tcam_filter); @@ -850,8 +1123,7 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, } if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) { - PMD_DRV_LOG(ERR, - "Fdir filter del dynamic lookup for block failed!"); + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); return -EINVAL; } /* Calculate TCAM index. */ @@ -873,14 +1145,9 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, dynamic_block_id, tcam_filter->index, index, tmp->dynamic_index_cnt - 1, nic_dev->tcam_rule_nums - 1); - tmp->dynamic_index[tcam_filter->index] = 0; - tmp->dynamic_index_cnt--; - nic_dev->tcam_rule_nums--; - if (tmp->dynamic_index_cnt == 0) { - hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tmp->dynamic_block_id); - hinic3_free_dynamic_block_resource(tcam_info, tmp); - } + nic_dev->tcam_rule_nums--; /* If the number of rules is 0, the TCAM filter is disabled. */ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) @@ -930,6 +1197,7 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_tcam_filter *tcam_filter; @@ -940,11 +1208,15 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule)); memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key)); - hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, - &fdir_tcam_rule); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + else + hinic3_fdir_tcam_info_htn_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + /* Search for a filter. 
*/ tcam_filter = - hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key); + hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_ADD, HINIC3_INVALID_INDEX); if (tcam_filter != NULL && add) { PMD_DRV_LOG(ERR, "Filter exists."); return -EEXIST; @@ -965,6 +1237,13 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, fdir_filter->tcam_index = (int)(fdir_tcam_rule.index); } else { + tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_NOT_ADD, + fdir_filter->tcam_index); + if (tcam_filter == NULL) { + PMD_DRV_LOG(ERR, "Filter doesn't exist."); + return -ENOENT; + } PMD_DRV_LOG(INFO, "begin to del tcam filter"); ret = hinic3_del_tcam_filter(dev, tcam_filter); if (ret) @@ -1088,7 +1367,7 @@ hinic3_free_fdir_filter(struct rte_eth_dev *dev) static int hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1097,7 +1376,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1107,7 +1386,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Request Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1117,7 +1396,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Response Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1129,19 +1408,19 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, set_arp_rep_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, !add); + ethertype_filter, !add); set_arp_req_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1150,7 +1429,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the LACP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1160,7 +1439,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the OAM Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_OAM, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d", add ? 
"Add" : "Del", ret); @@ -1172,14 +1451,14 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, set_arp_oam_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1188,7 +1467,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the LLDP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1198,7 +1477,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the CDCP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_CDCP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1210,14 +1489,14 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, set_arp_cdcp_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1245,7 +1524,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, return hinic3_flow_set_arp_filter(dev, ethertype_filter, add); case RTE_ETHER_TYPE_RARP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_RARP, ethertype_filter, add); case RTE_ETHER_TYPE_SLOW: return hinic3_flow_set_slow_filter(dev, ethertype_filter, add); @@ -1255,11 +1534,11 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, case RTE_ETHER_TYPE_CNM: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_CNM, ethertype_filter, add); case RTE_ETHER_TYPE_ECP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_ECP, ethertype_filter, add); default: PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d", @@ -1270,7 +1549,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, } static int -hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter) +hinic3_flow_ethertype_rule_nums(struct hinic3_ethertype_filter *ethertype_filter) { switch (ethertype_filter->ether_type) { case RTE_ETHER_TYPE_ARP: @@ -1309,7 +1588,7 @@ hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filte */ int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { /* Get dev private info. 
*/ diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h index 8659f588d9..277d89d4fd 100644 --- a/drivers/net/hinic3/hinic3_fdir.h +++ b/drivers/net/hinic3/hinic3_fdir.h @@ -14,6 +14,30 @@ #define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index)) +#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \ + ((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \ + ((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_INVALID_INDEX 0xFFFF + +enum hinic3_ether_type { + HINIC3_PKT_TYPE_ARP = 1, + HINIC3_PKT_TYPE_ARP_REQ, + HINIC3_PKT_TYPE_ARP_REP, + HINIC3_PKT_TYPE_RARP, + HINIC3_PKT_TYPE_LACP, + HINIC3_PKT_TYPE_LLDP, + HINIC3_PKT_TYPE_OAM, + HINIC3_PKT_TYPE_CDCP, + HINIC3_PKT_TYPE_CNM, + HINIC3_PKT_TYPE_ECP = 10, + HINIC3_PKT_TYPE_BUTT, + + HINIC3_PKT_UNKNOWN = 31, +}; + /* Indicate a traffic filtering rule. */ struct rte_flow { TAILQ_ENTRY(rte_flow) node; @@ -30,6 +54,8 @@ struct hinic3_fdir_rule_key { uint16_t src_port; uint16_t dst_port; uint8_t proto; + uint8_t vlan_flag; + uint16_t ether_type; }; struct hinic3_fdir_filter { @@ -42,17 +68,34 @@ struct hinic3_fdir_filter { uint32_t rq_index; /**< Queue assigned when matched. */ }; +struct hinic3_ethertype_filter { + int tcam_index[HINIC3_PKT_TYPE_BUTT]; + uint16_t ether_type; /**< Ether type to match */ + uint16_t queue; /**< Queue assigned to when match*/ +}; + /* This structure is used to describe a basic filter type. */ struct hinic3_filter_t { uint16_t filter_rule_nums; enum rte_filter_type filter_type; - struct rte_eth_ethertype_filter ethertype_filter; + struct hinic3_ethertype_filter ethertype_filter; struct hinic3_fdir_filter fdir_filter; }; +enum hinic3_action_type { + HINIC3_ACTION_ADD, + HINIC3_ACTION_NOT_ADD, +}; + enum hinic3_fdir_tunnel_mode { HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0, - HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_NVGRE = 2, + HINIC3_FDIR_TUNNEL_MODE_FC = 3, + HINIC3_FDIR_TUNNEL_MODE_GPE = 4, + HINIC3_FDIR_TUNNEL_MODE_GENEVE = 5, + HINIC3_FDIR_TUNNEL_MODE_NSH = 6, + HINIC3_FDIR_TUNNEL_MODE_IPIP = 7, }; enum hinic3_fdir_ip_type { @@ -61,7 +104,6 @@ enum hinic3_fdir_ip_type { HINIC3_FDIR_IP_TYPE_ANY = 2, }; -/* Describe the key structure of the TCAM. */ struct hinic3_tcam_key_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -77,11 +119,13 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t dipv4_l : 16; - uint32_t rsvd2 : 16; + + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t dport : 16; uint32_t sport : 16; @@ -89,9 +133,10 @@ struct hinic3_tcam_key_mem { uint32_t rsvd6 : 16; uint32_t outer_sipv4_h : 16; - uint32_t outer_sipv4_l : 16; + uint32_t outer_sipv4_l : 16; uint32_t outer_dipv4_h : 16; + uint32_t outer_dipv4_l : 16; uint32_t vni_h : 16; @@ -110,13 +155,14 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t sipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t dipv4_l : 16; uint32_t rsvd3; uint32_t dport : 16; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t rsvd5 : 16; uint32_t sport : 16; @@ -135,18 +181,90 @@ struct hinic3_tcam_key_mem { #endif }; -/* - * Define the IPv6-related TCAM key data structure in common - * scenarios or IPv6 tunnel scenarios. 
- */ +struct hinic3_tcam_key_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h: 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t sipv4_h : 16; + + uint32_t sipv4_l : 16; + uint32_t rsvd5 : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t rsvd8 : 16; + uint32_t dipv4_h : 16; + + uint32_t dipv4_l : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd5 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t sipv4_h : 16; + uint32_t vni_l : 16; + + uint32_t rsvd5 : 16; + uint32_t sipv4_l : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t dipv4_h : 16; + uint32_t rsvd8 : 16; + + uint32_t sport : 16; + uint32_t dipv4_l :16; + + uint32_t rsvd9 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; - /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */ uint32_t ip_proto : 8; uint32_t tunnel_type : 4; uint32_t outer_ip_type : 1; - uint32_t rsvd1 : 3; + uint32_t vlan_flag : 1; + uint32_t rsvd1 : 2; uint32_t function_id : 15; uint32_t ip_type : 1; @@ -179,7 +297,9 @@ struct hinic3_tcam_key_ipv6_mem { uint32_t dipv6_key7 : 16; uint32_t rsvd2 : 16; #else - uint32_t rsvd1 : 3; + uint32_t rsvd1 : 2; + uint32_t vlan_flag : 1; + uint32_t outer_ip_type : 1; uint32_t tunnel_type : 4; uint32_t ip_proto : 8; @@ -218,10 +338,86 @@ struct hinic3_tcam_key_ipv6_mem { #endif }; -/* - * Define the tcam key value data structure related to IPv6 in - * the VXLAN scenario. 
- */ +struct hinic3_tcam_key_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t sipv6_key0 : 16; + + uint32_t sipv6_key1 : 16; + uint32_t sipv6_key2 : 16; + + uint32_t sipv6_key3 : 16; + uint32_t sipv6_key4 : 16; + + uint32_t sipv6_key5 : 16; + uint32_t sipv6_key6 : 16; + + uint32_t sipv6_key7 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t sipv6_key0 : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t sipv6_key2 : 16; + uint32_t sipv6_key1 : 16; + + uint32_t sipv6_key4 : 16; + uint32_t sipv6_key3 : 16; + + uint32_t sipv6_key6 : 16; + uint32_t sipv6_key5 : 16; + + uint32_t dipv6_key0 : 16; + uint32_t sipv6_key7 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd2 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_vxlan_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -246,7 +442,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t sport : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3 : 16; uint32_t outer_sipv4_h : 16; @@ -281,7 +478,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t dipv6_key7 : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t sport : 16; uint32_t outer_sipv4_h : 16; @@ -298,6 +496,88 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { #endif }; +struct hinic3_tcam_key_vxlan_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t rsvd3 : 16; + + uint32_t rsvd4 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t rsvd3 : 16; + uint32_t vni_l : 16; + + uint32_t dipv6_key0 : 16; + 
uint32_t rsvd4 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd5 : 16; + uint32_t dport : 16; +#endif +}; + /* * TCAM key structure. The two unions indicate the key and mask respectively. * The TCAM key is consistent with the TCAM entry. @@ -307,18 +587,26 @@ struct hinic3_tcam_key { struct hinic3_tcam_key_mem key_info; struct hinic3_tcam_key_ipv6_mem key_info_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_info_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_info_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_info_vxlan_ipv6_htn; }; union { struct hinic3_tcam_key_mem key_mask; struct hinic3_tcam_key_ipv6_mem key_mask_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_mask_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_mask_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_mask_vxlan_ipv6_htn; }; }; /* Structure indicates the TCAM filter. */ struct hinic3_tcam_filter { - TAILQ_ENTRY(hinic3_tcam_filter) - entries; /**< Filter entry, used for linked list operations. */ + /** Filter entry, used for linked list operations. */ + TAILQ_ENTRY(hinic3_tcam_filter) entries; uint16_t dynamic_block_id; /**< Dynamic block ID. */ uint16_t index; /**< TCAM index. */ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */ @@ -362,33 +650,24 @@ struct hinic3_tcam_info { #define HINIC3_CNM_RULE_NUM 1 #define HINIC3_ECP_RULE_NUM 2 +#define HINIC3_UINT1_MAX 0x1 +#define HINIC3_UINT2_MAX 0x3 +#define HINIC3_UINT3_MAX 0x7 +#define HINIC3_UINT4_MAX 0xf +#define HINIC3_UINT5_WIDTH 0x5 +#define HINIC3_UINT5_MAX 0x1f +#define HINIC3_UINT15_MAX 0x7fff + /* Define Ethernet type. */ #define RTE_ETHER_TYPE_CNM 0x22e7 #define RTE_ETHER_TYPE_ECP 0x8940 -/* Protocol type of the data packet. */ -enum hinic3_ether_type { - HINIC3_PKT_TYPE_ARP = 1, - HINIC3_PKT_TYPE_ARP_REQ, - HINIC3_PKT_TYPE_ARP_REP, - HINIC3_PKT_TYPE_RARP, - HINIC3_PKT_TYPE_LACP, - HINIC3_PKT_TYPE_LLDP, - HINIC3_PKT_TYPE_OAM, - HINIC3_PKT_TYPE_CDCP, - HINIC3_PKT_TYPE_CNM, - HINIC3_PKT_TYPE_ECP = 10, - - HINIC3_PKT_UNKNOWN = 31, -}; - int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add); int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add); - void hinic3_free_fdir_filter(struct rte_eth_dev *dev); int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, uint32_t queue_id, uint32_t able); diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index d0acba4cf4..e1741d1156 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -277,22 +277,6 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); -/** - * Get cmdq ops software tile NIC(stn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); - -/** - * Get cmdq ops hardware tile NIC(htn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); - /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 9e2c80f759..4c12943a05 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -22,8 +22,7 @@ * Current pi. */ static inline void -hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, - uint16_t *pi) +hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, uint16_t *pi) { *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx); @@ -84,8 +83,7 @@ hinic3_rx_fill_wqe(struct hinic3_rxq *rxq) if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { /* Unit of cqe length is 16B. */ - hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, - cqe_dma, + hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, cqe_dma, HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT); /* Use fixed len. */ rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; @@ -436,12 +434,18 @@ hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev, rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); return err; } @@ -488,8 +492,7 @@ hinic3_update_rss_config(struct rte_eth_dev *dev, goto init_rss_fail; } - err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, - prio_tc); + err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, prio_tc); if (err) { PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err); goto init_rss_fail; @@ -797,7 +800,7 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) } } - hinic3_rearm_rxq_mbuf(rxq); + (void)hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); if (err) @@ -813,7 +816,6 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return err; } - static inline uint64_t hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { @@ -1018,8 +1020,8 @@ hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hini uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); - cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); - cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index e0ff095c04..6b2bffb14e 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -393,7 +393,7 @@ static int hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_wqe_info *wqe_info) + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = 
mbuf->ol_flags; struct hinic3_offload_info *offload_info = &wqe_info->offload_info; @@ -409,7 +409,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + wqe_info->queue_info.payload_offset = wqe_info->payload_offset >> 1; if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; @@ -457,7 +457,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, offload_info->out_l4_en = 1; set_tx_wqe_offload: - nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); + nic_dev->tx_ops->nic_tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -627,9 +627,8 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, return err; /* Non-tso mbuf only check sge num. */ - if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { + if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); - } /* Tso mbuf. */ wqe_info->payload_offset = @@ -647,8 +646,7 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, } static inline void -hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, - uint32_t len) +hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, uint32_t len) { buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); @@ -832,14 +830,14 @@ hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT); *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | - SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset, PLDOFF) | SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index ffafe39fb5..73f4922734 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -52,4 +52,12 @@ struct hinic3_htn_vlan_ctx { uint16_t dest_func_id; }; +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @return + * Pointer to ops. 
+ */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); + #endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build index b79b753716..b286cdb79c 100644 --- a/drivers/net/hinic3/meson.build +++ b/drivers/net/hinic3/meson.build @@ -16,8 +16,6 @@ endif cflags += ['-DHW_CONVERT_ENDIAN'] -subdir('base') - sources = files( 'hinic3_ethdev.c', 'hinic3_nic_io.c', @@ -28,3 +26,9 @@ sources = files( ) includes += include_directories('base') +includes += include_directories('stn_adapt') +includes += include_directories('stn_adapt') + +subdir('base') +subdir('htn_adapt') +subdir('stn_adapt') \ No newline at end of file diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index dfe8598f78..f41f060d17 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -94,7 +94,7 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } -static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev __rte_unused, const uint32_t *indir_table, struct hinic3_cmd_buf *cmd_buf) { diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index a40c4faa89..f1720c29c7 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -35,4 +35,12 @@ struct hinic3_stn_vlan_ctx { uint32_t vlan_sel; }; +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); + #endif /* _HINIC3_STN_CMDQ_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
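Note on the meson.build hunk above: it adds the 'stn_adapt' include directory twice and never adds 'htn_adapt', even though both new subdirectories are created and entered with subdir(). The intended block was presumably along these lines (a sketch inferred only from the lines already present in the hunk, not from any later revision):

	includes += include_directories('base')
	includes += include_directories('stn_adapt')
	includes += include_directories('htn_adapt')

	subdir('base')
	subdir('htn_adapt')
	subdir('stn_adapt')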
* Re: [v6 0/7] hinic3 change for support new SPx NIC 2026-03-19 13:52 ` [v6 " Feifei Wang ` (6 preceding siblings ...) 2026-03-19 13:52 ` [V6 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang @ 2026-03-21 17:32 ` Stephen Hemminger 2026-03-22 16:32 ` Stephen Hemminger 8 siblings, 0 replies; 80+ messages in thread From: Stephen Hemminger @ 2026-03-21 17:32 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, chenyi221 On Thu, 19 Mar 2026 21:52:06 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > From: chenyi221 <chenyi221@huawei.com> > > Change hinic3 driver to support Huawei new SPx series NIC. > > v2: > --fix build issues > > v3: > --fix community review comments and err reports > > v4: > --fix rss type assignment error > > v5: > --fix community ubuntu-22.04-clang err > > v6: > --fix atomic compilation error > > Feifei Wang (7): > net/hinic3: add support for new SPx series NIC > net/hinic3: add enhance cmdq support for new SPx series NIC > net/hinic3: use different callback func to split new/old cmdq > operations > net/hinic3: add fun init ops to support Compact CQE > net/hinic3: add rx ops to support Compact CQE > net/hinic3: add tx ops to support Compact CQE > net/hinic3: use different callback func to support htn fdir > > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ > drivers/net/hinic3/base/hinic3_csr.h | 18 +- > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- > drivers/net/hinic3/base/hinic3_hwdev.h | 18 + > drivers/net/hinic3/base/hinic3_hwif.c | 10 +- > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- > drivers/net/hinic3/base/hinic3_mgmt.h | 2 + > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- > drivers/net/hinic3/base/meson.build | 1 + > drivers/net/hinic3/hinic3_ethdev.c | 279 ++++++-- > drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- > drivers/net/hinic3/hinic3_rx.c | 267 +++++-- > drivers/net/hinic3/hinic3_rx.h | 182 ++++- > drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ > drivers/net/hinic3/hinic3_tx.h | 154 +++- > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ > drivers/net/hinic3/htn_adapt/meson.build | 7 + > drivers/net/hinic3/meson.build | 8 +- > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ > drivers/net/hinic3/stn_adapt/meson.build | 7 + > 33 files changed, 3362 insertions(+), 1442 deletions(-) > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > > -- > 2.45.1.windows.1 
Looks good, only minor feedback from AI to address. --- The V6 series fixes most issues from the V2 review: the uppercase hex `0X0229` is corrected, `rte_smp_rmb()` is replaced with `rte_atomic_thread_fence(rte_memory_order_acquire)`, the `__atomic_load_n()` GCC built-in is gone, the `hinic3_memzone_free(ci_mz)` on NULL is removed, the error-path labels in `hinic3_func_init()` are now correctly ordered, the `cos_num_max` ternary logic is fixed, and the `HINIC3_CMDQ_BUF_SIZE` change is documented. Two issues remain: --- Patch 3/7: net/hinic3: use different callback func to split new/old cmdq operations The call sites use `hinic3_cmdq_get_stn_ops()` and `hinic3_cmdq_get_htn_ops()` but the definitions in stn_adapt/ and htn_adapt/ are `hinic3_nic_cmdq_get_stn_ops()` and `hinic3_nic_cmdq_get_htn_ops()`. The header declarations also use the short names without `_nic_`. Patch 4 fixes the call sites but each commit must compile independently. Please either use consistent names from the start in patch 3 or squash the fix into this patch. Patch 7/7: net/hinic3: use different callback func to support htn fdir The copy-paste bug in `hinic3_rss_hash_update()` was fixed but the same bug persists in `hinic3_rss_conf_get()`. The else branch still has: rss_type.ipv6_ext = 0; rss_type.ipv6_ext = 0; The second line should be `rss_type.tcp_ipv6_ext = 0`. ^ permalink raw reply [flat|nested] 80+ messages in thread
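For the second point above, the corrected else branch would presumably read as follows. This is a minimal sketch built only from the two lines quoted in the review; the surrounding hinic3_rss_conf_get() body from v6 is not part of this thread, so everything around the two assignments is assumed:

	} else {
		rss_type.ipv6_ext = 0;
		rss_type.tcp_ipv6_ext = 0;
	}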
* Re: [v6 0/7] hinic3 change for support new SPx NIC 2026-03-19 13:52 ` [v6 " Feifei Wang ` (7 preceding siblings ...) 2026-03-21 17:32 ` [v6 0/7] hinic3 change for support new SPx NIC Stephen Hemminger @ 2026-03-22 16:32 ` Stephen Hemminger 8 siblings, 0 replies; 80+ messages in thread From: Stephen Hemminger @ 2026-03-22 16:32 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, chenyi221 On Thu, 19 Mar 2026 21:52:06 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > From: chenyi221 <chenyi221@huawei.com> > > Change hinic3 driver to support Huawei new SPx series NIC. > > v2: > --fix build issues > > v3: > --fix community review comments and err reports > > v4: > --fix rss type assignment error > > v5: > --fix community ubuntu-22.04-clang err > > v6: > --fix atomic compilation error > > Feifei Wang (7): > net/hinic3: add support for new SPx series NIC > net/hinic3: add enhance cmdq support for new SPx series NIC > net/hinic3: use different callback func to split new/old cmdq > operations > net/hinic3: add fun init ops to support Compact CQE > net/hinic3: add rx ops to support Compact CQE > net/hinic3: add tx ops to support Compact CQE > net/hinic3: use different callback func to support htn fdir > > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ > drivers/net/hinic3/base/hinic3_csr.h | 18 +- > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- > drivers/net/hinic3/base/hinic3_hwdev.h | 18 + > drivers/net/hinic3/base/hinic3_hwif.c | 10 +- > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- > drivers/net/hinic3/base/hinic3_mgmt.h | 2 + > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- > drivers/net/hinic3/base/meson.build | 1 + > drivers/net/hinic3/hinic3_ethdev.c | 279 ++++++-- > drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- > drivers/net/hinic3/hinic3_rx.c | 267 +++++-- > drivers/net/hinic3/hinic3_rx.h | 182 ++++- > drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ > drivers/net/hinic3/hinic3_tx.h | 154 +++- > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ > drivers/net/hinic3/htn_adapt/meson.build | 7 + > drivers/net/hinic3/meson.build | 8 +- > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ > drivers/net/hinic3/stn_adapt/meson.build | 7 + > 33 files changed, 3362 insertions(+), 1442 deletions(-) > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > > -- > 2.45.1.windows.1 > > You should also check and update the 
documentation. Full list found by AI. New capabilities added by the patches with no corresponding documentation: New device IDs — SP230 (0x0229/0x3750) and SP920 (0x0224) are added in patch 1, but hinic3.rst is not updated to list the new adapter models. The existing text just says "SPx series" which is vague enough to arguably cover it, but specific model names would be helpful. GENEVE tunnel TSO — Patch 4/6 adds RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO to tx_offload_capa when NIC_F_GENEVE_OFFLOAD is set. The features matrix hinic3.ini does not have a GENEVE tunnel TSO entry, nor does it list geneve under [rte_flow items]. IP-in-IP tunnel TSO — Patch 4/6 adds RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO. Not reflected in hinic3.ini. VXLAN-GPE tunnel support — Patch 6 adds HINIC3_PKT_TX_TUNNEL_VXLAN_GPE handling in the TX path. Not in hinic3.ini. Outer UDP checksum — Patch 6 adds RTE_MBUF_F_TX_OUTER_UDP_CKSUM support. Not in hinic3.ini. QinQ / double VLAN — Patch 4 adds HINIC3_PKT_TX_QINQ_PKT (mapped to RTE_MBUF_F_TX_QINQ). Not in hinic3.ini (QinQ offload is available in default.ini but not set for hinic3). Release notes — Adding support for new NIC hardware (SP230, SP920) and new offload features (GENEVE TSO, IP-in-IP TSO) should be mentioned in the current release notes file. Features already documented that are unchanged (no action needed): The existing features in hinic3.ini — RSS, TSO, LRO, VLAN filter, checksum offloads, scattered Rx, promiscuous/allmulticast, Inner L3/L4 checksum, vxlan flow item — all remain correct. The series does not remove any existing capability. Summary for mailing list: The series adds new device support (SP230, SP920) and new offload capabilities (GENEVE tunnel TSO, IP-in-IP tunnel TSO, VXLAN-GPE, outer UDP checksum, QinQ) but includes no documentation updates. Please update doc/guides/nics/features/hinic3.ini and the release notes to reflect the new hardware and offload capabilities. ^ permalink raw reply [flat|nested] 80+ messages in thread
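For readers not familiar with the features matrix, the requested documentation change amounts to one-line entries in doc/guides/nics/features/hinic3.ini, using the same entry names as doc/guides/nics/features/default.ini. An illustrative fragment only (the entry names are taken from the v7 documentation update that follows; whether an entry is 'Y' or 'P' depends on how complete the support is):

	RSS key update   = Y
	SR-IOV           = Y
	VLAN offload     = Y
	QinQ offload     = P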
* [PATCH v7 0/7] hinic3 change for support new SPx NIC 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (8 preceding siblings ...) 2026-03-19 13:52 ` [v6 " Feifei Wang @ 2026-03-23 8:04 ` Feifei Wang 2026-03-23 8:04 ` [PATCH v7 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang ` (6 more replies) 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang 10 siblings, 7 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-23 8:04 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Change hinic3 driver to support Huawei new SPx series NIC. v2: --fix build issues v3: --fix community review comments and err reports v4: --fix rss type assignment error v5: --fix community ubuntu-22.04-clang err v6: --fix atomic compilation error v6: --fix community review comments v7: --fix htn/stn ops function name error --update doc/guides for hinic3 driver Feifei Wang (7): net/hinic3: add support for new SPx series NIC net/hinic3: add enhance cmdq support for new SPx series NIC net/hinic3: use different callback func to split new/old cmdq operations net/hinic3: add fun init ops to support Compact CQE net/hinic3: add rx ops to support Compact CQE net/hinic3: add tx ops to support Compact CQE net/hinic3: use different callback func to support htn fdir doc/guides/nics/features/hinic3.ini | 11 +- doc/guides/nics/hinic3.rst | 5 +- doc/guides/rel_notes/release_26_03.rst | 8 + drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ drivers/net/hinic3/base/hinic3_csr.h | 18 +- drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_hwif.c | 10 +- drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- drivers/net/hinic3/base/meson.build | 1 + drivers/net/hinic3/hinic3_ethdev.c | 275 ++++++-- drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- drivers/net/hinic3/hinic3_rx.c | 267 +++++-- drivers/net/hinic3/hinic3_rx.h | 182 ++++- drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ drivers/net/hinic3/hinic3_tx.h | 154 +++- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ drivers/net/hinic3/htn_adapt/meson.build | 7 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 36 files changed, 3379 insertions(+), 1445 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 
100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build -- 2.45.1.windows.1 ^ permalink raw reply [flat|nested] 80+ messages in thread
* [PATCH v7 1/7] net/hinic3: add support for new SPx series NIC 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang @ 2026-03-23 8:04 ` Feifei Wang 2026-03-23 19:51 ` Stephen Hemminger 2026-03-23 8:04 ` [PATCH v7 2/7] net/hinic3: add enhance cmdq " Feifei Wang ` (5 subsequent siblings) 6 siblings, 1 reply; 80+ messages in thread From: Feifei Wang @ 2026-03-23 8:04 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add new device id to support Huawei new SPx series Network Adapters. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- doc/guides/nics/features/hinic3.ini | 11 ++++++++--- doc/guides/nics/hinic3.rst | 5 ++++- doc/guides/rel_notes/release_26_03.rst | 8 ++++++++ drivers/net/hinic3/base/hinic3_csr.h | 18 +++++++++--------- drivers/net/hinic3/base/hinic3_hwif.c | 10 +++++++--- drivers/net/hinic3/hinic3_ethdev.c | 14 +++++++------- 6 files changed, 43 insertions(+), 23 deletions(-) diff --git a/doc/guides/nics/features/hinic3.ini b/doc/guides/nics/features/hinic3.ini index bc70c887cb..74fea318ee 100644 --- a/doc/guides/nics/features/hinic3.ini +++ b/doc/guides/nics/features/hinic3.ini @@ -20,14 +20,18 @@ Allmulticast mode = Y Unicast MAC filter = Y Multicast MAC filter = Y RSS hash = Y +RSS key update = Y RSS reta update = Y +SR-IOV = Y VLAN filter = Y Flow control = Y CRC offload = Y +VLAN offload = Y +QinQ offload = P L3 checksum offload = Y L4 checksum offload = Y -Inner L3 checksum = Y -Inner L4 checksum = Y +Inner L3 checksum = P +Inner L4 checksum = P Packet type parsing = Y Basic stats = Y Extended stats = Y @@ -40,12 +44,13 @@ ARMv8 = Y [rte_flow items] any = Y -eth = Y +eth = P icmp = Y ipv4 = Y ipv6 = Y tcp = Y udp = Y vxlan = Y + [rte_flow actions] queue = Y diff --git a/doc/guides/nics/hinic3.rst b/doc/guides/nics/hinic3.rst index e10f6bb450..a6117c713f 100644 --- a/doc/guides/nics/hinic3.rst +++ b/doc/guides/nics/hinic3.rst @@ -16,16 +16,19 @@ Features - Receiver Side Scaling (RSS) - Flow filtering - Checksum offload +- VLAN/QinQ stripping and inserting - TSO offload - Promiscuous mode - Port hardware statistics +- Jumbo frames - Link state information - Link flow control - Scattered and gather for TX and RX - Allmulticast mode - MTU update - Multicast MAC filter -- Flow API +- NUMA support +- Generic Flow API - Set Link down or up - VLAN filter and VLAN offload - SR-IOV - Partially supported at this point, VFIO only diff --git a/doc/guides/rel_notes/release_26_03.rst b/doc/guides/rel_notes/release_26_03.rst index 3d2ed19eb8..a29c460c6a 100644 --- a/doc/guides/rel_notes/release_26_03.rst +++ b/doc/guides/rel_notes/release_26_03.rst @@ -81,6 +81,14 @@ New Features * Added application-initiated device reset. * Added support for receive flow steering. +* **Updated Huawei hinic3 ethernet driver.** + + * Added support for Huawei new SPx NICs, include SP230 and SP920(DPU). + * Added support for GENEVE tunnel TSO, IP-in-IP tunnel TSO of SP230. + * Added support for VXLAN-GPE CKSUM of SP620. + * Added support for tunnel packet outer UDP checksum. + * Added support for QinQ of SP620. + * **Updated Intel idpf ethernet driver.** * Added support for time sync features. 
diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h index 94b10601c4..eceb34e9fd 100644 --- a/drivers/net/hinic3/base/hinic3_csr.h +++ b/drivers/net/hinic3/base/hinic3_csr.h @@ -5,15 +5,15 @@ #ifndef _HINIC3_CSR_H_ #define _HINIC3_CSR_H_ -#ifdef CONFIG_SP_VID_DID -#define PCI_VENDOR_ID_SPNIC 0x1F3F -#define HINIC3_DEV_ID_STANDARD 0x9020 -#define HINIC3_DEV_ID_VF 0x9001 -#else -#define PCI_VENDOR_ID_HUAWEI 0x19e5 -#define HINIC3_DEV_ID_STANDARD 0x0222 -#define HINIC3_DEV_ID_VF 0x375F -#endif +#define PCI_VENDOR_ID_HUAWEI 0x19e5 + +#define HINIC3_DEV_ID_SP620 0x0222 +#define HINIC3_DEV_ID_VF_SP620 0x375F + +#define HINIC3_DEV_ID_SP230 0x0229 +#define HINIC3_DEV_ID_VF_SP230 0x3750 + +#define HINIC3_DEV_ID_SP920 0x0224 /* * Bit30/bit31 for bar index flag. diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c index 080254bf44..c82b223fa0 100644 --- a/drivers/net/hinic3/base/hinic3_hwif.c +++ b/drivers/net/hinic3/base/hinic3_hwif.c @@ -138,7 +138,11 @@ #define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MASK)) -#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF) +static inline bool hinic3_is_vf_dev(const struct rte_pci_device *pdev) +{ + return pdev->id.device_id == HINIC3_DEV_ID_VF_SP620 || + pdev->id.device_id == HINIC3_DEV_ID_VF_SP230; +} uint32_t hinic3_hwif_read_reg(struct hinic3_hwif *hwif, uint32_t reg) @@ -552,7 +556,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) void *db_base = NULL; int cfg_bar; - cfg_bar = HINIC3_IS_VF_DEV(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR + cfg_bar = hinic3_is_vf_dev(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR : HINIC3_PF_PCI_CFG_REG_BAR; cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr; @@ -561,7 +565,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) "mem_resource addr is null, cfg_regs_base is NULL"); return -EFAULT; } - if (!HINIC3_IS_VF_DEV(pci_dev)) { + if (!hinic3_is_vf_dev(pci_dev)) { mgmt_reg_base = pci_dev->mem_resource[HINIC3_PCI_MGMT_REG_BAR].addr; if (mgmt_reg_base == NULL) { diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 0f72728a95..da2d6722d2 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -3521,13 +3521,13 @@ hinic3_dev_uninit(struct rte_eth_dev *dev) } static const struct rte_pci_id pci_id_hinic3_map[] = { -#ifdef CONFIG_SP_VID_DID - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)}, -#else - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)}, -#endif + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP620)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP620)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP230)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP920)}, {.vendor_id = 0}, }; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
* Re: [PATCH v7 1/7] net/hinic3: add support for new SPx series NIC 2026-03-23 8:04 ` [PATCH v7 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang @ 2026-03-23 19:51 ` Stephen Hemminger 0 siblings, 0 replies; 80+ messages in thread From: Stephen Hemminger @ 2026-03-23 19:51 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, Feifei Wang On Mon, 23 Mar 2026 16:04:44 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > diff --git a/doc/guides/rel_notes/release_26_03.rst b/doc/guides/rel_notes/release_26_03.rst > index 3d2ed19eb8..a29c460c6a 100644 > --- a/doc/guides/rel_notes/release_26_03.rst > +++ b/doc/guides/rel_notes/release_26_03.rst > @@ -81,6 +81,14 @@ New Features > * Added application-initiated device reset. > * Added support for receive flow steering. > > +* **Updated Huawei hinic3 ethernet driver.** > + > + * Added support for Huawei new SPx NICs, include SP230 and SP920(DPU). Minor grammar note: should be "including" not "include" here. ^ permalink raw reply [flat|nested] 80+ messages in thread
* [PATCH v7 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang 2026-03-23 8:04 ` [PATCH v7 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang @ 2026-03-23 8:04 ` Feifei Wang 2026-03-23 8:04 ` [PATCH v7 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang ` (4 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-23 8:04 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add enhance command queue for new SPx series NIC New SPx series NIC uses enhance command queue to send messages to hardware NIC, which is different from previous SPx NIC's common command queue.HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 to adapt to the two types of NICs. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_cmd.h | 80 ++-- drivers/net/hinic3/base/hinic3_cmdq.c | 370 +++++++----------- drivers/net/hinic3/base/hinic3_cmdq.h | 112 +++++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 ++++++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++++ drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 77 ++-- drivers/net/hinic3/base/meson.build | 1 + 13 files changed, 627 insertions(+), 333 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h index 6042ca51bd..f2d5d47522 100644 --- a/drivers/net/hinic3/base/hinic3_cmd.h +++ b/drivers/net/hinic3/base/hinic3_cmd.h @@ -23,14 +23,21 @@ #define HINIC3_RSS_TYPE_GET(val, member) \ (((uint32_t)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1) +#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) + /* NIC CMDQ MODE. */ enum hinic3_ucode_cmd { - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT = 1, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE = 4, - HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE = 5, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE = 6, - HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, + HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, + HINIC3_UCODE_CMD_ARM_SQ, + HINIC3_UCODE_CMD_ARM_RQ, + HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_SET_IQ_ENABLE, + HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_VLAN_CTX, }; /* Commands between NIC to MPU. 
*/ @@ -51,6 +58,12 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_RX_LRO = 13, HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE = 16, + + HINIC3_NIC_CMD_CACHE_OUT_QP_RES = 17, + HINIC3_NIC_CMD_SET_RQ_CI_CTX = 18, + HINIC3_NIC_CMD_SET_RQ_ENABLE = 19, + /* MAC & VLAN CFG */ HINIC3_NIC_CMD_GET_MAC = 20, HINIC3_NIC_CMD_SET_MAC = 21, @@ -59,6 +72,10 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + + HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN = 34, + HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN = 35, + /* RSS CFG */ HINIC3_NIC_CMD_RSS_CFG = 60, HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, @@ -89,6 +106,7 @@ enum hinic3_mgmt_cmd { HINIC3_MGMT_CMD_CFG_PAGESIZE = 22, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG = 23, HINIC3_MGMT_CMD_SET_DMA_ATTR = 25, + HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT = 26, HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40, HINIC3_MGMT_CMD_GET_FW_VERSION = 60, HINIC3_MGMT_CMD_GET_BOARD_INFO = 61, @@ -97,39 +115,39 @@ enum hinic3_mgmt_cmd { }; enum mag_cmd { - SERDES_CMD_PROCESS = 0, + SERDES_CMD_PROCESS = 0, - MAG_CMD_SET_PORT_CFG = 1, - MAG_CMD_SET_PORT_ADAPT = 2, - MAG_CMD_CFG_LOOPBACK_MODE = 3, + MAG_CMD_SET_PORT_CFG = 1, + MAG_CMD_SET_PORT_ADAPT = 2, + MAG_CMD_CFG_LOOPBACK_MODE = 3, - MAG_CMD_GET_PORT_ENABLE = 5, - MAG_CMD_SET_PORT_ENABLE = 6, - MAG_CMD_GET_LINK_STATUS = 7, - MAG_CMD_SET_LINK_FOLLOW = 8, - MAG_CMD_SET_PMA_ENABLE = 9, - MAG_CMD_CFG_FEC_MODE = 10, + MAG_CMD_GET_PORT_ENABLE = 5, + MAG_CMD_SET_PORT_ENABLE = 6, + MAG_CMD_GET_LINK_STATUS = 7, + MAG_CMD_SET_LINK_FOLLOW = 8, + MAG_CMD_SET_PMA_ENABLE = 9, + MAG_CMD_CFG_FEC_MODE = 10, /* PHY */ - MAG_CMD_GET_XSFP_INFO = 60, - MAG_CMD_SET_XSFP_ENABLE = 61, - MAG_CMD_GET_XSFP_PRESENT = 62, + MAG_CMD_GET_XSFP_INFO = 60, + MAG_CMD_SET_XSFP_ENABLE = 61, + MAG_CMD_GET_XSFP_PRESENT = 62, /* sfp/qsfp single byte read/write, for equipment test. 
*/ - MAG_CMD_SET_XSFP_RW = 63, - MAG_CMD_CFG_XSFP_TEMPERATURE = 64, + MAG_CMD_SET_XSFP_RW = 63, + MAG_CMD_CFG_XSFP_TEMPERATURE = 64, - MAG_CMD_WIRE_EVENT = 100, - MAG_CMD_LINK_ERR_EVENT = 101, + MAG_CMD_WIRE_EVENT = 100, + MAG_CMD_LINK_ERR_EVENT = 101, - MAG_CMD_EVENT_PORT_INFO = 150, - MAG_CMD_GET_PORT_STAT = 151, - MAG_CMD_CLR_PORT_STAT = 152, - MAG_CMD_GET_PORT_INFO = 153, - MAG_CMD_GET_PCS_ERR_CNT = 154, - MAG_CMD_GET_MAG_CNT = 155, - MAG_CMD_DUMP_ANTRAIN_INFO = 156, + MAG_CMD_EVENT_PORT_INFO = 150, + MAG_CMD_GET_PORT_STAT = 151, + MAG_CMD_CLR_PORT_STAT = 152, + MAG_CMD_GET_PORT_INFO = 153, + MAG_CMD_GET_PCS_ERR_CNT = 154, + MAG_CMD_GET_MAG_CNT = 155, + MAG_CMD_DUMP_ANTRAIN_INFO = 156, - MAG_CMD_MAX = 0xFF + MAG_CMD_MAX = 0xFF }; #endif /* _HINIC3_CMD_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c index e2b30ff94e..9c27c6f54c 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.c +++ b/drivers/net/hinic3/base/hinic3_cmdq.c @@ -5,6 +5,7 @@ #include "hinic3_compat.h" #include "hinic3_cmd.h" #include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" #include "hinic3_hwdev.h" #include "hinic3_hwif.h" #include "hinic3_mgmt.h" @@ -125,17 +126,17 @@ #define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi)) -#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) - #define FIRST_DATA_TO_WRITE_LAST sizeof(uint64_t) -#define WQE_LCMD_SIZE 64 -#define WQE_SCMD_SIZE 64 +#define WQE_LCMDQ_SIZE 64 +#define WQE_SCMDQ_SIZE 64 +#define WQE_ENHANCE_CMDQ_SIZE 32 #define COMPLETE_LEN 3 #define CMDQ_WQEBB_SIZE 64 #define CMDQ_WQEBB_SHIFT 6 +#define CMDQ_ENHANCE_WQEBB_SHIFT 4 #define CMDQ_WQE_SIZE 64 @@ -203,43 +204,6 @@ hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf) rte_free(cmd_buf); } -static uint32_t -cmdq_wqe_size(enum cmdq_wqe_type wqe_type) -{ - uint32_t wqe_size = 0; - - switch (wqe_type) { - case WQE_LCMD_TYPE: - wqe_size = WQE_LCMD_SIZE; - break; - case WQE_SCMD_TYPE: - wqe_size = WQE_SCMD_SIZE; - break; - } - - return wqe_size; -} - -static uint32_t -cmdq_get_wqe_size(enum bufdesc_len len) -{ - uint32_t wqe_size = 0; - - switch (len) { - case BUFDESC_LCMD_LEN: - wqe_size = WQE_LCMD_SIZE; - break; - case BUFDESC_SCMD_LEN: - wqe_size = WQE_SCMD_SIZE; - break; - default: - PMD_DRV_LOG(ERR, "Invalid bufdesc_len"); - break; - } - - return wqe_size; -} - static void cmdq_set_completion(struct hinic3_cmdq_completion *complete, struct hinic3_cmd_buf *buf_out) @@ -274,11 +238,11 @@ cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type, } static void -cmdq_wqe_fill(void *dst, void *src) +cmdq_wqe_fill(void *dst, void *src, int wqe_size) { memcpy((void *)((uint8_t *)dst + FIRST_DATA_TO_WRITE_LAST), (void *)((uint8_t *)src + FIRST_DATA_TO_WRITE_LAST), - CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST); + wqe_size - FIRST_DATA_TO_WRITE_LAST); /* The first 8 bytes should be written last. */ rte_atomic_thread_fence(rte_memory_order_release); @@ -369,182 +333,94 @@ cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in); } -/** - * Prepare necessary context for command queue, send a synchronous command with - * a direct response to hardware. It waits for completion of command by polling - * command queue for a response. - * - * @param[in] cmdq - * The command queue object that represents the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. 
- * @param[in] buf_in - * The input buffer containing the command parameters. - * @param[out] out_param - * A pointer to the location where the response data will be stored, if - * available. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - uint64_t *out_param, uint32_t timeout) +static void +cmdq_sync_wqe_prepare(struct hinic3_cmdq *cmdq, uint8_t mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + struct hinic3_cmdq_wqe *curr_wqe, uint16_t curr_pi, + enum hinic3_cmdq_cmd_type nic_cmd_type) { struct hinic3_cmdq_wqe wqe; - struct hinic3_wq *wq = cmdq->wq; - struct hinic3_cmdq_wqe *curr_wqe = NULL; - struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + int wrapped, wqe_size; + enum cmdq_cmd_type cmd_type; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + wqe_size = cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ ? + WQE_LCMDQ_SIZE : WQE_ENHANCE_CMDQ_SIZE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ - rte_spinlock_lock(&cmdq->cmdq_lock); + memset(&wqe, 0, (uint32_t)wqe_size); - curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } - - memset(&wqe, 0, sizeof(wqe)); wrapped = cmdq->wrapped; - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped, - mod, cmd, curr_prod_idx); - + cmd_type = (nic_cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP) ? + SYNC_CMD_DIRECT_RESP : SYNC_CMD_SGE_RESP; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_set_lcmd_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd, curr_pi); + else + hinic3_enhance_cmdq_set_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd); + /* The data written to HW should be in Big Endian Format */ hinic3_cpu_to_hw(&wqe, wqe_size); - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP; - - cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ - - if (out_param) { - wqe_lcmd = &curr_wqe->wqe_lcmd; - *out_param = rte_cpu_to_be_64(wqe_lcmd->completion.direct_resp); - } - - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; - -cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); - - return err; + cmdq_wqe_fill(curr_wqe, &wqe, wqe_size); } -/** - * Send a synchronous command with detailed response and wait for the - * completion. - * - * @param[in] cmdq - * The command queue object representing the queue to send the command to. 
- * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the parameters for the command. - * @param[out] buf_out - * The output buffer where the detailed response from the hardware will be - * stored. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - struct hinic3_cmd_buf *buf_out, uint32_t timeout) +#define NUM_WQEBBS_FOR_CMDQ_WQE 1 +#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2 + +static int cmdq_sync_cmd(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + uint64_t *out_param, uint32_t timeout, + enum hinic3_cmdq_cmd_type nic_cmd_type) { - struct hinic3_cmdq_wqe wqe; struct hinic3_wq *wq = cmdq->wq; struct hinic3_cmdq_wqe *curr_wqe = NULL; uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + uint32_t time; + uint64_t *direct_resp = NULL; + int err; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + num_wqebbs = (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) ? + NUM_WQEBBS_FOR_CMDQ_WQE : NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ + /* Keep wrapped and doorbell index correct */ rte_spinlock_lock(&cmdq->cmdq_lock); curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { + if (!curr_wqe) { err = -EBUSY; goto cmdq_unlock; } - memset(&wqe, 0, sizeof(wqe)); - wrapped = cmdq->wrapped; + cmdq_sync_wqe_prepare(cmdq, mod, cmd, buf_in, buf_out, + curr_wqe, curr_prod_idx, nic_cmd_type); + + cmdq->cmd_infos[curr_prod_idx].cmd_type = nic_cmd_type; next_prod_idx = curr_prod_idx + num_wqebbs; if (next_prod_idx >= wq->q_depth) { cmdq->wrapped = !cmdq->wrapped; next_prod_idx -= wq->q_depth; } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped, - mod, cmd, curr_prod_idx); - - hinic3_cpu_to_hw(&wqe, wqe_size); - - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP; - - cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); + cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); + time = msecs_to_cycles(timeout ? 
timeout : CMDQ_CMD_TIMEOUT); + err = hinic3_cmdq_poll_msg(cmdq, time); if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); + PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", curr_prod_idx); err = -ETIMEDOUT; goto cmdq_unlock; } - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ + rte_atomic_thread_fence(rte_memory_order_acquire); /* Read error code after completion */ + + if (out_param) { + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + direct_resp = + (uint64_t *)(&curr_wqe->wqe_lcmd.completion.direct_resp); + else + direct_resp = (uint64_t *) + (&curr_wqe->enhanced_cmdq_wqe.completion.sge_resp_lo_addr); + + *out_param = rte_cpu_to_be_64(*direct_resp); + } if (cmdq->errcode[curr_prod_idx]) err = cmdq->errcode[curr_prod_idx]; @@ -588,7 +464,8 @@ wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs) int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, - struct hinic3_cmd_buf *buf_in, uint64_t *out_param, uint32_t timeout) + struct hinic3_cmd_buf *buf_in, + uint64_t *out_param, uint32_t timeout) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; int err; @@ -605,8 +482,8 @@ hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, out_param, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, + NULL, out_param, timeout, HINIC3_CMD_TYPE_DIRECT_RESP); } int @@ -628,8 +505,8 @@ hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, buf_out, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, buf_out, + NULL, timeout, HINIC3_CMD_TYPE_SGE_RESP); } static void @@ -643,21 +520,23 @@ clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) { struct hinic3_ctrl *ctrl = NULL; uint32_t header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info); - int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN); - uint32_t wqe_size = cmdq_get_wqe_size(buf_len); uint16_t num_wqebbs; - - if (wqe_size == WQE_LCMD_SIZE) - ctrl = &wqe->wqe_lcmd.ctrl; - else - ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; - - /* Clear HW busy bit. */ - ctrl->ctrl_info = 0; + enum data_format df; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT); + if (df == DATA_SGE) + ctrl = &wqe->wqe_lcmd.ctrl; + else + ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; + ctrl->ctrl_info = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE; + } else { + wqe->enhanced_cmdq_wqe.completion.cs_format = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; + } rte_atomic_thread_fence(rte_memory_order_release); /**< Verify wqe is cleared. 
*/ - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq); hinic3_put_wqe(cmdq->wq, num_wqebbs); } @@ -735,25 +614,28 @@ static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; - struct hinic3_cmd_cmdq_ctxt cmdq_ctxt; - enum hinic3_cmdq_type cmdq_type; + struct hinic3_cmd_cmdq_ctxt cmdq_ctxt = {0}; + enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; uint16_t out_size = sizeof(cmdq_ctxt); + uint16_t cmd; int err; - for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { - memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt)); - cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + if (hwdev->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + cmd = HINIC3_MGMT_CMD_SET_CMDQ_CTXT; + } else { + cmdq_ctxt.enhance_ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_enhance_ctxt; + cmd = HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT; + } cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev); cmdq_ctxt.cmdq_id = cmdq_type; - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, - HINIC3_MGMT_CMD_SET_CMDQ_CTXT, + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, &cmdq_ctxt, sizeof(cmdq_ctxt), &cmdq_ctxt, &out_size); - if (err || !out_size || cmdq_ctxt.status) { - PMD_DRV_LOG(ERR, - "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", + PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", err, cmdq_ctxt.status, out_size); return -EFAULT; } @@ -794,6 +676,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) cmdqs->cmdqs_db_base = (uint8_t *)db_base; for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + cmdqs->cmdq[cmdq_type].cmdqs = cmdqs; err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, &cmdqs->saved_wqs[cmdq_type], cmdq_type); if (err) { @@ -801,8 +684,11 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) goto init_cmdq_err; } - cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], - &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + if (cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], + &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + else + hinic3_enhance_cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type]); } err = hinic3_set_cmdq_ctxts(hwdev); @@ -821,11 +707,12 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) } int -hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_init(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = NULL; size_t saved_wqs_size; char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE]; + uint32_t wqebb_shift; int err; cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0); @@ -835,6 +722,14 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) hwdev->cmdqs = cmdqs; cmdqs->hwdev = hwdev; + if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev)) + cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ; + else + cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ; + + wqebb_shift = (cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) ? 
+ CMDQ_ENHANCE_WQEBB_SHIFT : CMDQ_WQEBB_SHIFT; + saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq); cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0); if (!cmdqs->saved_wqs) { @@ -844,8 +739,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE); - snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", - hwdev->port_id); + snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", hwdev->port_id); cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name, HINIC3_CMDQ_DEPTH * HINIC3_MAX_CMDQ_TYPES, 0, 0, @@ -857,8 +751,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES, - HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT, - HINIC3_CMDQ_DEPTH); + HINIC3_CMDQ_WQ_BUF_SIZE, wqebb_shift, HINIC3_CMDQ_DEPTH); if (err) { PMD_DRV_LOG(ERR, "Allocate cmdq failed"); goto cmdq_alloc_err; @@ -884,7 +777,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } void -hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdqs_free(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; @@ -900,14 +793,36 @@ hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) rte_free(cmdqs); } +static int +hinic3_check_cmdq_done(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) +{ + struct hinic3_ctrl *ctrl = NULL; + uint32_t ctrl_info; + + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + /* Only arm bit using scmd wqe, the wqe is lcmd. */ + ctrl = &wqe->wqe_lcmd.ctrl; + ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); + + if (!WQE_COMPLETED(ctrl_info)) + return -EBUSY; + } else { + ctrl_info = wqe->enhanced_cmdq_wqe.completion.cs_format; + ctrl_info = hinic3_hw_cpu32(ctrl_info); + + if (!ENHANCE_CMDQ_WQE_CS_GET(ctrl_info, HW_BUSY)) + return -EBUSY; + } + return 0; +} + static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) { struct hinic3_cmdq_wqe *wqe = NULL; struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - struct hinic3_ctrl *ctrl = NULL; struct hinic3_cmdq_cmd_info *cmd_info = NULL; - uint32_t status_info, ctrl_info; + uint32_t status_info; uint16_t ci; int errcode; uint64_t end; @@ -928,13 +843,10 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) return -EINVAL; } - /* Only arm bit is using scmd wqe, the wqe is lcmd. */ - wqe_lcmd = &wqe->wqe_lcmd; - ctrl = &wqe_lcmd->ctrl; + /* Only arm bit using scmd wqe, the wqe is lcmd. 
*/ end = cycles + msecs_to_cycles(timeout); do { - ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); - if (WQE_COMPLETED(ctrl_info)) { + if (hinic3_check_cmdq_done(cmdq, wqe) == 0) { done = 1; break; } @@ -943,8 +855,14 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) } while (time_before(cycles, end)); if (done) { - status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); - errcode = WQE_ERRCODE_GET(status_info, VAL); + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + wqe_lcmd = &wqe->wqe_lcmd; + status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); + errcode = WQE_ERRCODE_GET(status_info, VAL); + } else { + status_info = hinic3_hw_cpu32(wqe->enhanced_cmdq_wqe.completion.cs_format); + errcode = ENHANCE_CMDQ_WQE_CS_GET(status_info, ERR_CODE); + } cmdq_update_errcode(cmdq, ci, errcode); clear_wqe_complete_bit(cmdq, wqe); err = 0; diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h index deac909488..b31b61029e 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.h +++ b/drivers/net/hinic3/base/hinic3_cmdq.h @@ -13,25 +13,55 @@ /* Pmd driver uses 64, kernel l2nic uses 4096. */ #define HINIC3_CMDQ_DEPTH 64 -#define HINIC3_CMDQ_BUF_SIZE 2048U +#define HINIC3_CMDQ_BUF_SIZE 1024U #define HINIC3_CEQ_ID_CMDQ 0 -enum cmdq_scmd_type { CMDQ_SET_ARM_CMD = 2 }; +#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -enum cmdq_wqe_type { WQE_LCMD_TYPE = 0, WQE_SCMD_TYPE = 1 }; +enum hinic3_cmdq_mode { + HINIC3_NORMAL_CMDQ, + HINIC3_ENHANCE_CMDQ +}; + +enum cmdq_scmd_type { + CMDQ_SET_ARM_CMD = 2 +}; + +enum cmdq_wqe_type { + WQE_LCMD_TYPE, + WQE_SCMD_TYPE +}; -enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 }; +enum ctrl_sect_len { + CTRL_SECT_LEN = 1, + CTRL_DIRECT_SECT_LEN = 2 +}; -enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 }; +enum bufdesc_len { + BUFDESC_LCMD_LEN = 2, + BUFDESC_SCMD_LEN = 3 +}; -enum data_format { DATA_SGE = 0}; +enum data_format { + DATA_SGE +}; -enum completion_format { COMPLETE_DIRECT = 0, COMPLETE_SGE = 1 }; +enum completion_format { + COMPLETE_DIRECT, + COMPLETE_SGE +}; -enum completion_request { CEQ_SET = 1 }; +enum completion_request { + CEQ_SET = 1 +}; -enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD }; +enum cmdq_cmd_type { + SYNC_CMD_DIRECT_RESP, + SYNC_CMD_SGE_RESP, + ASYNC_CMD +}; enum hinic3_cmdq_type { HINIC3_CMDQ_SYNC, @@ -44,17 +74,63 @@ enum hinic3_db_src_type { HINIC3_DB_SRC_L2NIC_SQ_TYPE }; -enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE }; +enum hinic3_cmdq_db_type { + HINIC3_DB_SQ_RQ_TYPE, + HINIC3_DB_CMDQ_TYPE +}; /* Cmdq ack type. */ enum hinic3_ack_type { HINIC3_ACK_TYPE_CMDQ = 0, HINIC3_ACK_TYPE_SHARE_CQN = 1, HINIC3_ACK_TYPE_APP_CQN = 2, - HINIC3_MOD_ACK_MAX = 15 }; +struct cmdq_enhance_completion { + uint32_t cs_format; + uint32_t sge_resp_hi_addr; + uint32_t sge_resp_lo_addr; + uint32_t sge_resp_len; /* bit 14~31 rsvd, soft can't use. 
*/ +}; + +struct cmdq_enhance_response { + uint32_t cs_format; + uint32_t resvd; + uint64_t direct_data; +}; + +struct sge_send_info { + uint32_t sge_hi_addr; + uint32_t sge_li_addr; + uint32_t seg_len; + uint32_t rsvd; +}; + +struct ctrl_section { + uint32_t header; + uint32_t rsv; + uint32_t sge_send_hi_addr; + uint32_t sge_send_lo_addr; +}; + +struct enhanced_cmdq_wqe { + struct ctrl_section ctrl_sec; /* 16B */ + struct cmdq_enhance_completion completion; /* 16B */ +}; + +/* Enhance cmdq context of hardware */ +struct enhance_cmdq_ctxt_info { + uint64_t eq_cfg; + uint64_t dfx_pi_ci; + + uint64_t pft_thd; + uint64_t pft_ci; + + uint64_t rsv; + uint64_t ci_cla_addr; +}; + /* Cmdq wqe ctrls. */ struct hinic3_cmdq_header { uint32_t header_info; @@ -126,6 +202,7 @@ struct hinic3_cmdq_wqe { union { struct hinic3_cmdq_inline_wqe inline_wqe; struct hinic3_cmdq_wqe_lcmd wqe_lcmd; + struct enhanced_cmdq_wqe enhanced_cmdq_wqe; }; }; @@ -142,8 +219,10 @@ struct hinic3_cmd_cmdq_ctxt { uint16_t func_idx; uint8_t cmdq_id; uint8_t rsvd1[5]; - - struct hinic3_cmdq_ctxt_info ctxt_info; + union { + struct hinic3_cmdq_ctxt_info ctxt_info; + struct enhance_cmdq_ctxt_info enhance_ctxt_info; + }; }; enum hinic3_cmdq_status { @@ -173,8 +252,10 @@ struct hinic3_cmdq { rte_spinlock_t cmdq_lock; struct hinic3_cmdq_ctxt_info cmdq_ctxt; + struct enhance_cmdq_ctxt_info cmdq_enhance_ctxt; struct hinic3_cmdq_cmd_info *cmd_infos; + struct hinic3_cmdqs *cmdqs; }; struct hinic3_cmdqs { @@ -188,6 +269,7 @@ struct hinic3_cmdqs { struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES]; uint32_t status; + uint8_t cmdq_mode; }; struct hinic3_cmd_buf { @@ -215,8 +297,8 @@ int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod int hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, uint32_t timeout); -int hinic3_init_cmdqs(struct hinic3_hwdev *hwdev); +int hinic3_cmdq_init(struct hinic3_hwdev *hwdev); -void hinic3_free_cmdqs(struct hinic3_hwdev *hwdev); +void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev); #endif /* _HINIC3_CMDQ_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.c b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c new file mode 100644 index 0000000000..e09597c9f3 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Huawei Technologies Co., Ltd + */ + +#include <rte_mbuf.h> + +#include "hinic3_compat.h" +#include "hinic3_hwdev.h" +#include "hinic3_hwif.h" +#include "hinic3_wq.h" +#include "hinic3_cmd.h" +#include "hinic3_mgmt.h" +#include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" + +#define WQ_PREFETCH_MAX 4 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +void +hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq) +{ + struct enhance_cmdq_ctxt_info *ctxt_info = &cmdq->cmdq_enhance_ctxt; + struct hinic3_wq *wq = cmdq->wq; + uint64_t cmdq_first_block_paddr, pfn; + uint16_t start_ci = (uint16_t)wq->cons_idx; + uint32_t start_pi = (uint16_t)wq->prod_idx; + + /* The data in HW is Big Endian Format */ + cmdq_first_block_paddr = wq->queue_buf_paddr; + pfn = CMDQ_PFN(cmdq_first_block_paddr, RTE_PGSIZE_4K); + + /* First part 16B */ + ctxt_info->eq_cfg = + ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) | + ENHANCED_CMDQ_SET(0, CTXT0_EQ) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) | + ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT); + + 
ctxt_info->dfx_pi_ci = + ENHANCED_CMDQ_SET(0, CTXT1_Q_DIS) | + ENHANCED_CMDQ_SET(0, CTXT1_ERR_CODE) | + ENHANCED_CMDQ_SET(start_pi, CTXT1_PI) | + ENHANCED_CMDQ_SET(start_ci, CTXT1_CI); + + /* Second part 16B */ + ctxt_info->pft_thd = + ENHANCED_CMDQ_SET(CI_HIGN_IDX(start_ci), CTXT2_PFT_CI) | + ENHANCED_CMDQ_SET(1, CTXT2_O_BIT) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MIN, CTXT2_PFT_MIN) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MAX, CTXT2_PFT_MAX) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_THRESHOLD, CTXT2_PFT_THD); + ctxt_info->pft_ci = + ENHANCED_CMDQ_SET(pfn, CTXT3_PFT_CI_ADDR) | + ENHANCED_CMDQ_SET(start_ci, CTXT3_PFT_CI); + + /* Third part 16B */ + pfn = WQ_BLOCK_PFN(cmdq_first_block_paddr); + + ctxt_info->ci_cla_addr = ENHANCED_CMDQ_SET(pfn, CTXT4_CI_CLA_ADDR); +} + +static void +enhance_cmdq_set_completion(struct cmdq_enhance_completion *completion, + const struct hinic3_cmd_buf *buf_out) +{ + completion->sge_resp_hi_addr = upper_32_bits(buf_out->dma_addr); + completion->sge_resp_lo_addr = lower_32_bits(buf_out->dma_addr); + completion->sge_resp_len = HINIC3_CMDQ_BUF_SIZE; +} + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, + enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, + struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd) +{ + struct enhanced_cmdq_wqe *enhanced_wqe = &wqe->enhanced_cmdq_wqe; + + enhanced_wqe->ctrl_sec.header = + ENHANCE_CMDQ_WQE_HEADER_SET(buf_in->size, SEND_SGE_LEN) | + ENHANCE_CMDQ_WQE_HEADER_SET(1, BDSL) | + ENHANCE_CMDQ_WQE_HEADER_SET(DATA_SGE, DF) | + ENHANCE_CMDQ_WQE_HEADER_SET(NORMAL_WQE_TYPE, DN) | + ENHANCE_CMDQ_WQE_HEADER_SET(COMPACT_WQE_TYPE, EC) | + ENHANCE_CMDQ_WQE_HEADER_SET((uint32_t)wrapped, HW_BUSY_BIT); + + enhanced_wqe->ctrl_sec.sge_send_hi_addr = upper_32_bits(buf_in->dma_addr); + enhanced_wqe->ctrl_sec.sge_send_lo_addr = lower_32_bits(buf_in->dma_addr); + + enhanced_wqe->completion.cs_format = + ENHANCE_CMDQ_WQE_CS_SET(cmd, CMD) | + ENHANCE_CMDQ_WQE_CS_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE) | + ENHANCE_CMDQ_WQE_CS_SET(mod, MOD); + + switch (cmd_type) { + case SYNC_CMD_DIRECT_RESP: + enhanced_wqe->completion.cs_format |= ENHANCE_CMDQ_WQE_CS_SET(INLINE_DATA, CF); + break; + case SYNC_CMD_SGE_RESP: + if (buf_out) { + enhanced_wqe->completion.cs_format |= + ENHANCE_CMDQ_WQE_CS_SET(SGE_RESPONSE, CF); + enhance_cmdq_set_completion(&enhanced_wqe->completion, buf_out); + } + break; + case ASYNC_CMD: + break; + } +} diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.h b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h new file mode 100644 index 0000000000..8de0ae4d71 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_CMDQ_ENHANCE_H_ +#define _HINIC3_CMDQ_ENHANCE_H_ + +#include "hinic3_mgmt.h" + +#define NORMAL_WQE_TYPE 0 +#define COMPACT_WQE_TYPE 1 + +/* First part 16B */ +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT0_RSV1_SHIFT 52 +#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT 53 +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT 61 +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT 62 +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63 + +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT0_RSV1_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_EQ_MASK 0xFFU +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_SHIFT 0 
+#define ENHANCED_CMDQ_CTXT1_ERR_CODE_SHIFT 1 +#define ENHANCED_CMDQ_CTXT1_RSV1_SHIFT 3 +#define ENHANCED_CMDQ_CTXT1_PI_SHIFT 32 +#define ENHANCED_CMDQ_CTXT1_CI_SHIFT 48 + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_MASK 0x1U +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_MASK 0x3U +#define ENHANCED_CMDQ_CTXT1_RSV1_MASK 0x1FFFFFFFU +#define ENHANCED_CMDQ_CTXT1_PI_MASK 0xFFFFU +#define ENHANCED_CMDQ_CTXT1_CI_MASK 0xFFFFU + +/* Second part 16B */ +#define ENHANCED_CMDQ_CTXT2_PFT_CI_SHIFT 0 +#define ENHANCED_CMDQ_CTXT2_O_BIT_SHIFT 4 +#define ENHANCED_CMDQ_CTXT2_PFT_THD_SHIFT 32 +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_SHIFT 46 +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_SHIFT 57 + +#define ENHANCED_CMDQ_CTXT2_PFT_CI_MASK 0xFU +#define ENHANCED_CMDQ_CTXT2_O_BIT_MASK 0x1U +#define ENHANCED_CMDQ_CTXT2_PFT_THD_MASK 0x3FFFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_MASK 0x7FFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_MASK 0x7FU + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT3_PFT_CI_SHIFT 52 + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT3_PFT_CI_MASK 0xFFFFU + +/* Third part 16B */ +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_MASK 0x7FFFFFFFFFFFFFU + +#define ENHANCED_CMDQ_SET(val, member) \ + (((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \ + ENHANCED_CMDQ_##member##_SHIFT) + +#define CI_IDX_HIGH_SHIFH 12 +#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_SHIFT 0 +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_SHIFT 19 +#define ENHANCE_CMDQ_WQE_HEADER_DF_SHIFT 28 +#define ENHANCE_CMDQ_WQE_HEADER_DN_SHIFT 29 +#define ENHANCE_CMDQ_WQE_HEADER_EC_SHIFT 30 +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_MASK 0x3FFFFU +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_HEADER_DF_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_DN_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_EC_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_HEADER_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_HEADER_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0 +#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT 4 +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12 +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT 14 +#define ENHANCE_CMDQ_WQE_CS_MOD_SHIFT 16 +#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU +#define ENHANCE_CMDQ_WQE_CS_CMD_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK 0x1U +#define ENHANCE_CMDQ_WQE_CS_MOD_MASK 0x1FU +#define ENHANCE_CMDQ_WQE_CS_CF_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_CS_##member##_MASK) + +enum complete_format { + INLINE_DATA, + SGE_RESPONSE +}; + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd); + +void hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq 
*cmdq); + +#endif /*_HINIC3_CMDQ_ENHANCE_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c index d259b88a2d..6541bc0428 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.c +++ b/drivers/net/hinic3/base/hinic3_hw_comm.c @@ -12,7 +12,7 @@ #include "hinic3_wq.h" #include "hinic3_nic_cfg.h" -/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */ +/* Buffer sizes must be in ascending order. */ const uint32_t hinic3_hw_rx_buf_size[] = { HINIC3_RX_BUF_SIZE_32B, HINIC3_RX_BUF_SIZE_64B, @@ -239,11 +239,14 @@ hinic3_convert_rx_buf_size(uint32_t rx_buf_sz, uint32_t *match_sz) } static uint16_t -get_hw_rx_buf_size(uint32_t rx_buf_sz) +get_hw_rx_buf_size(struct hinic3_hwdev *hwdev, uint32_t rx_buf_sz) { uint16_t num_hw_types = RTE_DIM(hinic3_hw_rx_buf_size); uint16_t i; + if (HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev)) + return rx_buf_sz; + for (i = 0; i < num_hw_types; i++) { if (hinic3_hw_rx_buf_size[i] == rx_buf_sz) return i; @@ -271,8 +274,12 @@ hinic3_set_root_ctxt(struct hinic3_hwdev *hwdev, uint32_t rq_depth, root_ctxt.cmdq_depth = 0; root_ctxt.lro_en = 1; root_ctxt.rq_depth = (uint16_t)rte_log2_u32(rq_depth); - root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz); + root_ctxt.rx_buf_sz = get_hw_rx_buf_size(hwdev, rx_buf_sz); root_ctxt.sq_depth = (uint16_t)rte_log2_u32(sq_depth); + root_ctxt.cmdq_mode = hwdev->cmdqs->cmdq_mode; + + if (hwdev->cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) + root_ctxt.cmdq_depth--; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, @@ -403,7 +410,7 @@ hinic3_comm_features_nego(struct hinic3_hwdev *hwdev, uint16_t out_size = sizeof(feature_nego); int err; - if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD) + if (!hwdev || !s_feature || size > HINIC3_MAX_FEATURE_QWORD) return -EINVAL; memset(&feature_nego, 0, sizeof(feature_nego)); diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h index b86f5aad8f..42ff04ee9d 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.h +++ b/drivers/net/hinic3/base/hinic3_hw_comm.h @@ -9,17 +9,17 @@ #define HINIC3_MGMT_CMD_OP_GET 0 #define HINIC3_MGMT_CMD_OP_SET 1 -#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 -#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 -#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 -#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 -#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 - -#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU -#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU -#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U +#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 +#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 +#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 +#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 +#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 + +#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU +#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU +#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U #define HINIC3_MSIX_CNT_SET(val, member) \ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \ @@ -129,7 +129,7 @@ struct hinic3_cmd_root_ctxt { uint8_t cmdq_depth; uint16_t rx_buf_sz; uint8_t lro_en; - uint8_t rsvd1; + uint8_t cmdq_mode; uint16_t sq_depth; uint16_t rq_depth; uint64_t rsvd2; @@ -143,17 +143,16 @@ enum hinic3_fw_ver_type { HINIC3_FW_VER_TYPE_CFG, }; -#define MGMT_MSG_CMD_OP_SET 1 -#define MGMT_MSG_CMD_OP_GET 0 +#define 
MGMT_MSG_CMD_OP_SET 1 +#define MGMT_MSG_CMD_OP_GET 0 -#define COMM_MAX_FEATURE_QWORD 4 struct comm_cmd_feature_nego { struct mgmt_msg_head head; uint16_t func_id; uint8_t opcode; /**< 1: set, 0: get. */ uint8_t rsvd; - uint64_t s_feature[COMM_MAX_FEATURE_QWORD]; + uint64_t s_feature[HINIC3_MAX_FEATURE_QWORD]; }; #define HINIC3_FW_VERSION_LEN 16 diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c index 668bbf4a0e..5d12cf7b5f 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.c +++ b/drivers/net/hinic3/base/hinic3_hwdev.c @@ -261,7 +261,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) { int err; - err = hinic3_init_cmdqs(hwdev); + err = hinic3_cmdq_init(hwdev); if (err) { PMD_DRV_LOG(ERR, "Init cmd queues failed"); return err; @@ -276,7 +276,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) return 0; set_cmdq_depth_err: - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); return err; } @@ -284,7 +284,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev) { - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); } static void @@ -426,6 +426,12 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) goto func_reset_err; } + err = hinic3_get_comm_features(hwdev, hwdev->features, HINIC3_MAX_FEATURE_QWORD); + if (err) { + PMD_DRV_LOG(ERR, "Get comm features failed"); + goto get_common_features_err; + } + err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1); if (err) goto set_used_state_err; @@ -444,6 +450,7 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0); set_used_state_err: func_reset_err: +get_common_features_err: get_func_info_err: free_mgmt_channel(hwdev); diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h index 161f1e2de5..c6661aa1a6 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.h +++ b/drivers/net/hinic3/base/hinic3_hwdev.h @@ -23,6 +23,18 @@ enum hinic3_set_arm_type { HINIC3_SET_ARM_TYPE_NUM }; +enum { + HINIC3_F_API_CHAIN = 1U << 0, + HINIC3_F_CLP = 1U << 1, + HINIC3_F_CHANNEL_DETECT = 1U << 2, + HINIC3_F_MBOX_SEGMENT = 1U << 3, + HINIC3_F_CMDQ_NUM = 1U << 4, + HINIC3_F_VIRTIO_VQ_SIZE = 1U << 5, + HINIC3_F_EXTEND_CAP = 1U << 6, + HINIC3_F_SMF_CACHE_INVALID = 1U << 7, + HINIC3_F_ONLY_ENHANCE_CMDQ = 1U << 8, + HINIC3_F_USE_REAL_RX_BUF_SIZE = 1U << 9, +}; struct hinic3_page_addr { void *virt_addr; uint64_t phys_addr; @@ -78,6 +90,11 @@ struct hinic3_hw_stats { #define HINIC3_CHIP_FAULT_SIZE (110 * 1024) #define MAX_DRV_BUF_SIZE 4096 +#define HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_ONLY_ENHANCE_CMDQ) +#define HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_USE_REAL_RX_BUF_SIZE) + struct nic_cmd_chip_fault_stats { uint32_t offset; uint8_t chip_fault_stats[MAX_DRV_BUF_SIZE]; @@ -141,6 +158,7 @@ struct hinic3_hwdev { uint16_t max_vfs; uint16_t link_status; + uint64_t features[HINIC3_MAX_FEATURE_QWORD]; }; bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev); diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c index 5db6d49922..b1f850dfff 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.c +++ b/drivers/net/hinic3/base/hinic3_mgmt.c @@ -13,6 +13,8 @@ #define SEGMENT_LEN 48 #define MGMT_MSG_MAX_SEQ_ID \ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN) +#define MGMT_MSG_LAST_SEG_MAX_LEN \ + 
(MAX_PF_MGMT_BUF_SIZE - SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID) #define BUF_OUT_DEFAULT_SIZE 1 @@ -34,7 +36,8 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, uint8_t seq_id, uint8_t seg_len, uint16_t msg_id) { - if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN) + if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN || + (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)) return false; if (seq_id == 0) { diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h index f8148406d3..4e77b9bec4 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.h +++ b/drivers/net/hinic3/base/hinic3_mgmt.h @@ -70,6 +70,8 @@ typedef enum { #define HINIC3_TOE_RES (1 << RES_TYPE_TOE) #define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC) +#define HINIC3_MAX_FEATURE_QWORD 4 + struct hinic3_recv_msg { void *msg; diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index c35fefdeac..ac44da46c2 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -48,6 +48,46 @@ static const struct vf_msg_handler vf_mag_cmd_handler[] = { }, }; +int +hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, + uint16_t cmd, void *buf_in, uint16_t in_size, + void *buf_out, uint16_t *out_size) +{ + uint32_t i; + bool cmd_to_pf = false; + struct hinic3_handler_info handler_info = { + .cmd = cmd, + .buf_in = buf_in, + .in_size = in_size, + .buf_out = buf_out, + .out_size = out_size, + .dst_func = HINIC3_MGMT_SRC_ID, + .direction = HINIC3_MSG_DIRECT_SEND, + .ack_type = HINIC3_MSG_ACK, + }; + + if (hwdev == NULL) + return -EINVAL; + + if (hinic3_func_type(hwdev) == TYPE_VF) { + if (mod == HINIC3_MOD_HILINK) { + for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { + if (cmd == vf_mag_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } else if (mod == HINIC3_MOD_L2NIC) { + for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { + if (cmd == vf_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } + } + if (cmd_to_pf) + handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); + + return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); +} + /** * Set CI table for a SQ. 
* @@ -1712,43 +1752,6 @@ hinic3_set_rq_flush(struct hinic3_hwdev *hwdev, uint16_t q_id) return err; } -int -hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, - uint16_t cmd, void *buf_in, uint16_t in_size, - void *buf_out, uint16_t *out_size) -{ - uint32_t i; - bool cmd_to_pf = false; - struct hinic3_handler_info handler_info = { - .cmd = cmd, - .buf_in = buf_in, - .in_size = in_size, - .buf_out = buf_out, - .out_size = out_size, - .dst_func = HINIC3_MGMT_SRC_ID, - .direction = HINIC3_MSG_DIRECT_SEND, - .ack_type = HINIC3_MSG_ACK, - }; - - if (hinic3_func_type(hwdev) == TYPE_VF) { - if (mod == HINIC3_MOD_HILINK) { - for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { - if (cmd == vf_mag_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } else if (mod == HINIC3_MOD_L2NIC) { - for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { - if (cmd == vf_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } - } - if (cmd_to_pf) - handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); - - return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); -} - int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status) diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build index 48ac7a47f5..729980d087 100644 --- a/drivers/net/hinic3/base/meson.build +++ b/drivers/net/hinic3/base/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2025 Huawei Technologies Co., Ltd base_sources = files( + 'hinic3_cmdq_enhance.c', 'hinic3_cmdq.c', 'hinic3_eqs.c', 'hinic3_hw_cfg.c', -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
* [PATCH v7 3/7] net/hinic3: use different callback func to split new/old cmdq operations 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang 2026-03-23 8:04 ` [PATCH v7 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang 2026-03-23 8:04 ` [PATCH v7 2/7] net/hinic3: add enhance cmdq " Feifei Wang @ 2026-03-23 8:04 ` Feifei Wang 2026-03-23 8:04 ` [PATCH v7 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang ` (3 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-23 8:04 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For the new SPx series NIC with enhance cmdq, the driver sends control messages to the hardware tile in the NIC (htn). This is different from the previous SPx NIC, which sends control messages to the software tile in the NIC (stn). Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 50 ++---- drivers/net/hinic3/base/hinic3_nic_cfg.h | 79 +++++---- drivers/net/hinic3/hinic3_ethdev.c | 16 +- drivers/net/hinic3/hinic3_nic_io.h | 115 +++++++++++++ drivers/net/hinic3/hinic3_rx.c | 3 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 161 ++++++++++++++++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 62 +++++++ drivers/net/hinic3/htn_adapt/meson.build | 7 + .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 145 ++++++++++++++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 +++++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 11 files changed, 618 insertions(+), 73 deletions(-) create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index ac44da46c2..22caac0457 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -11,6 +11,7 @@ #include "hinic3_mbox.h" #include "hinic3_nic_cfg.h" #include "hinic3_wq.h" +#include "hinic3_nic_io.h" struct vf_msg_handler { uint16_t cmd; @@ -442,6 +443,7 @@ int hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) { struct hinic3_vport_state en_state; + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint16_t out_size = sizeof(en_state); int err; @@ -451,6 +453,7 @@ hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) memset(&en_state, 0, sizeof(en_state)); en_state.func_id = hinic3_global_func_id(hwdev); en_state.state = enable ? 
1 : 0; + en_state.num_qps = nic_dev->num_rqs; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_VPORT_ENABLE, @@ -1159,13 +1162,12 @@ hinic3_rss_set_hash_key(struct hinic3_hwdev *hwdev, uint8_t *key, uint16_t key_s } int -hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, - uint32_t *indir_table, uint32_t indir_table_size) +hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table) { struct hinic3_cmd_buf *cmd_buf = NULL; - uint16_t *indir_tbl = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; int err; - uint32_t i; if (!hwdev || !indir_table) return -EINVAL; @@ -1177,31 +1179,28 @@ hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, } cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, - cmd_buf, cmd_buf, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_get_rss_indir_table(nic_dev, cmd_buf); + err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, cmd_buf, 0); if (err) { PMD_DRV_LOG(ERR, "Get rss indir table failed"); hinic3_free_cmd_buf(cmd_buf); return err; } - indir_tbl = (uint16_t *)cmd_buf->buf; - for (i = 0; i < indir_table_size; i++) - indir_table[i] = *(indir_tbl + i); + nic_dev->cmdq_ops->cmd_buf_to_rss_indir_table(cmd_buf, indir_table); hinic3_free_cmd_buf(cmd_buf); return 0; } int -hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size) +hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table) { - struct nic_rss_indirect_tbl *indir_tbl = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - uint32_t i, size; - uint32_t *temp = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; uint64_t out_param = 0; int err; @@ -1214,22 +1213,9 @@ hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table return -ENOMEM; } - cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; - memset(indir_tbl, 0, sizeof(*indir_tbl)); - - for (i = 0; i < indir_table_size; i++) - indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - size = sizeof(indir_tbl->entry) / sizeof(uint16_t); - temp = (uint32_t *)indir_tbl->entry; - for (i = 0; i < size; i++) - temp[i] = rte_cpu_to_be_32(temp[i]); - - err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, - cmd_buf, &out_param, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir_table, cmd_buf); + err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set rss indir table failed"); err = -EFAULT; @@ -1477,7 +1463,7 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EIO; } - *cos_id = vf_dcb.state.default_cos; + *cos_id = vf_dcb.state.default_cos % HINIC3_COS_NUM_MAX_HTN; return 0; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index a88d62333d..06d5bc7d1b 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -14,16 +14,17 @@ #define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1) #define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1) -#define HINIC3_DCB_UP_MAX 0x8 +#define HINIC3_DCB_UP_MAX 0x8 -#define 
HINIC3_MAX_NUM_RQ 256 +#define HINIC3_MAX_NUM_RQ 256 -#define HINIC3_MAX_MTU_SIZE 9600 -#define HINIC3_MIN_MTU_SIZE 256 +#define HINIC3_MAX_MTU_SIZE 9600 +#define HINIC3_MIN_MTU_SIZE 256 -#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX_HTN 4 -#define HINIC3_VLAN_TAG_SIZE 4 +#define HINIC3_VLAN_TAG_SIZE 4 #define HINIC3_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2) @@ -34,28 +35,41 @@ #define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen) -#define HINIC3_PF_SET_VF_ALREADY 0x4 -#define HINIC3_MGMT_STATUS_EXIST 0x6 -#define CHECK_IPSU_15BIT 0x8000 +#define HINIC3_PF_SET_VF_ALREADY 0x4 +#define HINIC3_MGMT_STATUS_EXIST 0x6 +#define CHECK_IPSU_15BIT 0x8000 -#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB -#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC +#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB +#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC -#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF +#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF -#define HINIC3_MAX_UC_MAC_ADDRS 128 -#define HINIC3_MAX_MC_MAC_ADDRS 2048 +#define HINIC3_MAX_UC_MAC_ADDRS 128 +#define HINIC3_MAX_MC_MAC_ADDRS 2048 -#define CAP_INFO_MAX_LEN 512 -#define VENDOR_MAX_LEN 17 +#define CAP_INFO_MAX_LEN 512 +#define VENDOR_MAX_LEN 17 /* Structures for RSS config. */ -#define HINIC3_RSS_INDIR_SIZE 256 -#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 -#define HINIC3_RSS_KEY_SIZE 40 -#define HINIC3_RSS_ENABLE 0x01 -#define HINIC3_RSS_DISABLE 0x00 -#define HINIC3_INVALID_QID_BASE 0xffff +#define HINIC3_RSS_INDIR_SIZE 256 +#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 +#define HINIC3_RSS_KEY_SIZE 40 +#define HINIC3_RSS_ENABLE 0x01 +#define HINIC3_RSS_DISABLE 0x00 +#define HINIC3_INVALID_QID_BASE 0xffff + +#define HINIC3_SUPPORT_FEATURE(dev, feature) \ + ((hinic3_get_driver_feature(dev) & NIC_F_##feature) != 0) +#define HINIC3_SUPPORT_RX_HW_COMPACT_CQE(dev) \ + HINIC3_SUPPORT_FEATURE(dev, RX_HW_COMPACT_CQE) +#define HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(dev) \ + HINIC3_SUPPORT_FEATURE(dev, TX_WQE_COMPACT_TASK) +#define HINIC3_SUPPORT_VXLAN_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, VXLAN_OFFLOAD) +#define HINIC3_SUPPORT_GENEVE_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, GENEVE_OFFLOAD) +#define HINIC3_SUPPORT_IPXIP_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, IPXIP_OFFLOAD) struct hinic3_rss_type { uint8_t tcp_ipv6_ext; @@ -312,7 +326,9 @@ struct hinic3_vport_state { uint16_t func_id; uint16_t rsvd1; uint8_t state; /**< 0:disable, 1:enable. */ - uint8_t rsvd2[3]; + uint8_t num_qps; + uint8_t rx_compact_wqe_en; + uint8_t rsvd2; }; #define MAG_CMD_PORT_DISABLE 0x0 @@ -670,12 +686,15 @@ enum hinic3_func_tbl_cfg_bitmap { FUNC_CFG_INIT, FUNC_CFG_RX_BUF_SIZE, FUNC_CFG_MTU, + FUNC_CFG_RX_COMPACT_WQE_EN, /**< Enable 8Byte WQE. */ }; struct hinic3_func_tbl_cfg { uint16_t rx_wqe_buf_size; uint16_t mtu; - uint32_t rsvd[9]; + uint8_t rx_compact_wqe_en; /**< Enable Rx 8Byte compact WQE. */ + uint8_t rsvd0[3]; + uint32_t rsvd1[8]; }; struct hinic3_cmd_set_func_tbl { @@ -895,7 +914,7 @@ struct hinic3_set_fdir_ethertype_rule { struct mgmt_msg_head head; uint16_t func_id; - uint16_t rsvd1; + uint16_t index; uint8_t pkt_type_en; uint8_t pkt_type; uint8_t qid; @@ -1231,14 +1250,11 @@ int hinic3_rss_template_free(struct hinic3_hwdev *hwdev); * Device pointer to hwdev. * @param[in] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. 
*/ -int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table); /** * Get RSS indirect table. @@ -1247,14 +1263,11 @@ int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_t * Device pointer to hwdev. * @param[out] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table); /** * Set RSS type. diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index da2d6722d2..e39a6ecf13 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -15,6 +15,8 @@ #include "base/hinic3_hw_comm.h" #include "base/hinic3_nic_cfg.h" #include "base/hinic3_nic_event.h" +#include "htn_adapt/hinic3_htn_cmdq.h" +#include "stn_adapt/hinic3_stn_cmdq.h" #include "hinic3_nic_io.h" #include "hinic3_tx.h" #include "hinic3_rx.h" @@ -2577,8 +2579,7 @@ hinic3_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) { PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err); return err; @@ -2626,8 +2627,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) return err; @@ -2648,8 +2648,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, } } - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl); if (err) PMD_DRV_LOG(ERR, "Set RSS reta table failed"); @@ -3387,6 +3386,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto get_cap_fail; } + if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); + else + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index db5802e4b7..e589deed23 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -6,6 +6,7 @@ #define _HINIC3_NIC_IO_H_ #include "hinic3_ethdev.h" +#include "base/hinic3_cmdq.h" #define HINIC3_SQ_WQEBB_SHIFT 4 #define HINIC3_RQ_WQEBB_SHIFT 3 @@ -25,6 +26,13 @@ #define HINIC3_CI_PADDR(base_paddr, q_id) \ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE) +#define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) + +#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -37,12 +45,119 @@ enum hinic3_queue_type { HINIC3_MAX_QUEUE_TYPE, }; +enum hinic3_qp_ctxt_type { + HINIC3_QP_CTXT_TYPE_SQ, + HINIC3_QP_CTXT_TYPE_RQ, +}; + +/* Prepare cmd to clean tso/lro space */ +typedef 
uint8_t (*prepare_cmd_buf_clean_tso_lro_space_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type); +/* Prepare cmd to store RQ and TQ ctxt */ +typedef uint8_t (*prepare_cmd_buf_qp_context_multi_store_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, + uint16_t max_ctxts); +/* Prepare cmd to modify vlan tag */ +typedef uint8_t (*prepare_cmd_buf_modify_svlan_t)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, + uint8_t vlan_mode); +/* Prepare cmd to set RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_set_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf); +/* Prepare cmd to get RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_get_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf); +/* Configure RSS indir table */ +typedef void (*cmd_buf_to_rss_indir_table_t)(const struct hinic3_cmd_buf *cmd_buf, + uint32_t *indir_table); + +struct hinic3_nic_cmdq_ops { + prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space; + prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store; + prepare_cmd_buf_modify_svlan_t prepare_cmd_buf_modify_svlan; + prepare_cmd_buf_set_rss_indir_table_t prepare_cmd_buf_set_rss_indir_table; + prepare_cmd_buf_get_rss_indir_table_t prepare_cmd_buf_get_rss_indir_table; + cmd_buf_to_rss_indir_table_t cmd_buf_to_rss_indir_table; +}; + /* Doorbell info. */ struct hinic3_db { uint32_t db_info; uint32_t pi_hi; }; +struct hinic3_sq_ctxt { + uint32_t ci_pi; + uint32_t drop_mode_sp; + uint32_t wq_pfn_hi_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd0; + uint32_t pkt_drop_thd; + uint32_t global_sq_id; + uint32_t vlan_ceq_attr; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t rsvd8; + uint32_t rsvd9; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_ctxt { + uint32_t ci_pi; + uint32_t ceq_attr; + uint32_t wq_pfn_hi_type_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd[3]; + uint32_t cqe_sge_len; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t pi_paddr_hi; + uint32_t pi_paddr_lo; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_cqe_ctx { + struct mgmt_msg_head msg_head; + + uint8_t cqe_type; + uint8_t rq_id; + uint8_t threshold_cqe_num; + uint8_t rsvd1; + + uint16_t msix_entry_idx; + uint16_t rsvd2; + + uint32_t ci_addr_hi; + uint32_t ci_addr_lo; + + uint16_t timer_loop; + uint16_t rsvd3; +}; + +struct hinic3_rq_enable { + struct mgmt_msg_head msg_head; + + uint32_t rq_id; + uint8_t rq_enable; + uint8_t rsvd[3]; +}; + #define DB_INFO_QID_SHIFT 0 #define DB_INFO_NON_FILTER_SHIFT 22 #define DB_INFO_CFLAG_SHIFT 23 diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e8e417b474..3d5f4e4524 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -407,8 +407,7 @@ hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) /* Build indir tbl according to the number of rss queue. 
*/ hinic3_fill_indir_tbl(nic_dev, indir_tbl); - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c new file mode 100644 index 0000000000..d997647f48 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_htn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + ctxt_block->cmdq_hdr.dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id, uint16_t func_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->dest_func_id = func_id; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t func_id; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + func_id = hinic3_global_func_id(nic_dev->hwdev); + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid, func_id); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], + start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->dest_func_id = func_id; + vlan_ctx->start_qid = q_id; + vlan_ctx->vlan_tag = vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_HTN_CMD_SVLAN_MODIFY; +} + +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct 
hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf + sizeof(struct hinic3_rss_cmd_header); + cmd_buf->size = sizeof(struct hinic3_rss_cmd_header) + HINIC3_RSS_INDIR_SIZE; + memset(indir_tbl, 0, HINIC3_RSS_INDIR_SIZE); + + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl[i] = (uint8_t)(*(indir_table + i)); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(indir_tbl, HINIC3_RSS_INDIR_SIZE); + + return HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + memset(cmd_buf->buf, 0, cmd_buf->size); + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + return HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_be32_to_cpu(cmd_buf->buf, HINIC3_RSS_INDIR_SIZE); + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h new file mode 100644 index 0000000000..89709efdd0 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_HTN_CMDQ_H_ +#define _HINIC3_HTN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint32_t rsvd[2]; + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_rss_cmd_header { + uint32_t rsv[3]; + uint16_t rsv1; + uint16_t dest_func_id; +}; + +/* NIC HTN CMD */ +enum hinic3_htn_cmd { + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST = 0x20, + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_LD, + HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN, + HINIC3_HTN_CMD_SVLAN_MODIFY, + HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE, + HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE +}; + +struct hinic3_vlan_ctx { + uint32_t rsv[2]; + uint16_t vlan_tag; + uint8_t vlan_sel; + 
uint8_t vlan_mode; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @retval Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); + +#endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/htn_adapt/meson.build b/drivers/net/hinic3/htn_adapt/meson.build new file mode 100644 index 0000000000..17f7ad09e3 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_htn_cmdq.c', +) diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c new file mode 100644 index 0000000000..3d4becf07c --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_stn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->rsvd = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->func_id = func_id; + vlan_ctx->qid = q_id; + vlan_ctx->vlan_id = vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + 
hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i, size; + uint32_t *temp = NULL; + struct nic_rss_indirect_tbl *indir_tbl = NULL; + + indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; + cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); + memset(indir_tbl, 0, sizeof(*indir_tbl)); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); + size = sizeof(indir_tbl->entry) / 4; + temp = (uint32_t *)indir_tbl->entry; + for (i = 0; i < size; i++) { + rte_atomic_thread_fence(rte_memory_order_seq_cst); + temp[i] = rte_cpu_to_be_32(temp[i]); + } + return HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + (void)nic_dev; + memset(cmd_buf->buf, 0, cmd_buf->size); + + return HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint16_t *indir_tbl = NULL; + + indir_tbl = (uint16_t *)cmd_buf->buf; + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h new file mode 100644 index 0000000000..8235dcd0fa --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_STN_CMDQ_H_ +#define _HINIC3_STN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t rsvd; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; + uint32_t rsvd; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_vlan_ctx { + uint32_t func_id; + uint32_t qid; /* if qid = 0xFFFF, config for all queues */ + uint32_t vlan_id; + uint32_t vlan_mode; + uint32_t vlan_sel; +}; + +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. 
+ */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); + +#endif /* _HINIC3_STN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/stn_adapt/meson.build b/drivers/net/hinic3/stn_adapt/meson.build new file mode 100644 index 0000000000..99f7f66ab4 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_stn_cmdq.c', +) -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
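The stn_adapt and htn_adapt files above install two implementations behind one hinic3_nic_cmdq_ops table, so the common queue-context and RSS code can stay layout-agnostic: each prepare callback fills the command buffer in its own format and returns the opcode its firmware expects. The following stand-alone sketch shows that dispatch pattern in miniature; it is illustrative only, every identifier in it is invented for the example, and it is not code taken from the driver.

/* Toy model of a per-NIC-generation cmdq ops table. */
#include <stdint.h>
#include <stdio.h>

struct cmd_buf { uint8_t data[64]; uint16_t size; };

struct nic_cmdq_ops {
	/* Fill the buffer in the generation-specific layout and return the
	 * command opcode the firmware expects for that layout. */
	uint8_t (*prepare_set_rss_indir)(const uint32_t *indir, int n,
					 struct cmd_buf *buf);
};

static uint8_t legacy_prepare_set_rss_indir(const uint32_t *indir, int n,
					     struct cmd_buf *buf)
{
	int i;

	for (i = 0; i < n; i++)
		buf->data[i] = (uint8_t)indir[i];
	buf->size = (uint16_t)n;
	return 0x10; /* legacy opcode, made up for the example */
}

static uint8_t new_prepare_set_rss_indir(const uint32_t *indir, int n,
					  struct cmd_buf *buf)
{
	int i;

	/* Pretend the new hardware wants a 4-byte header before the table. */
	buf->data[0] = 0xAA;
	for (i = 0; i < n; i++)
		buf->data[4 + i] = (uint8_t)indir[i];
	buf->size = (uint16_t)(4 + n);
	return 0x24; /* new opcode, made up for the example */
}

static const struct nic_cmdq_ops legacy_ops = { legacy_prepare_set_rss_indir };
static const struct nic_cmdq_ops new_ops = { new_prepare_set_rss_indir };

int main(void)
{
	uint32_t indir[4] = { 0, 1, 2, 3 };
	struct cmd_buf buf = { 0 };
	int is_new_nic = 1; /* would come from a feature bit in practice */
	const struct nic_cmdq_ops *ops = is_new_nic ? &new_ops : &legacy_ops;
	uint8_t cmd = ops->prepare_set_rss_indir(indir, 4, &buf);

	/* The common path only sees (cmd, buf) and stays format-agnostic. */
	printf("send cmd 0x%x, %u bytes\n", cmd, (unsigned int)buf.size);
	return 0;
}

Returning the opcode from the prepare callback is the detail that keeps the caller generic: the code that actually posts the command buffer never needs to know whether the legacy or the new command set is in use.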
* [PATCH v7 4/7] net/hinic3: add fun init ops to support Compact CQE 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang ` (2 preceding siblings ...) 2026-03-23 8:04 ` [PATCH v7 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang @ 2026-03-23 8:04 ` Feifei Wang 2026-03-23 8:04 ` [PATCH v7 5/7] net/hinic3: add rx " Feifei Wang ` (2 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-23 8:04 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, use compact CQE to achieve better performance. In this mode, CQE is uploaded together with packet. When doing fun init, replace CQE's dma memory mapping with CI index, hinic3 driver will loop CI to check if packet arrive. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.c | 208 +++++-- drivers/net/hinic3/hinic3_ethdev.h | 117 ++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++++---------- drivers/net/hinic3/hinic3_nic_io.h | 58 +- drivers/net/hinic3/hinic3_rx.h | 18 + drivers/net/hinic3/hinic3_tx.h | 8 + .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 24 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 12 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 24 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 12 +- 10 files changed, 573 insertions(+), 433 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index e39a6ecf13..1010773ac1 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -32,7 +32,7 @@ #define HINIC3_DEFAULT_RX_FREE_THRESH 32u #define HINIC3_DEFAULT_TX_FREE_THRESH 32u -#define HINIC3_RX_WAIT_CYCLE_THRESH 500 +#define HINIC3_RX_WAIT_CYCLE_THRESH 150 /** * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID. @@ -431,8 +431,10 @@ hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) { + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint8_t default_cos = 0; uint8_t valid_cos_bitmap; + uint8_t cos_num_max; uint8_t i; valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap; @@ -441,7 +443,10 @@ hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EFAULT; } - for (i = 0; i < HINIC3_COS_NUM_MAX; i++) { + cos_num_max = nic_dev->feature_cap & NIC_F_HTN_CMDQ ? + HINIC3_COS_NUM_MAX_HTN : HINIC3_COS_NUM_MAX; + + for (i = 0; i < cos_num_max; i++) { if (valid_cos_bitmap & RTE_BIT32(i)) /* Find max cos id as default cos. */ default_cos = i; @@ -644,6 +649,15 @@ hinic3_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +hinic3_dev_tnl_tso_support(struct rte_eth_dev_info *info, struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + if (HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO; +} + /** * Get information about the device. 
* @@ -684,6 +698,8 @@ hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) + hinic3_dev_tnl_tso_support(info, nic_dev); info->hash_key_size = HINIC3_RSS_KEY_SIZE; info->reta_size = HINIC3_RSS_INDIR_SIZE; @@ -926,16 +942,25 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, struct hinic3_rxq *rxq = NULL; const struct rte_memzone *rq_mz = NULL; const struct rte_memzone *cqe_mz = NULL; + const struct rte_memzone *ci_mz = NULL; const struct rte_memzone *pi_mz = NULL; uint16_t rq_depth, rx_free_thresh; uint32_t queue_buf_size; void *db_addr = NULL; int wqe_count; uint32_t buf_size; + uint32_t rx_buf_size; int err; nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + /* Queue depth must be equal to queue 0 */ + if (qid != 0 && nb_desc != nic_dev->rxqs[0]->q_depth) { + PMD_DRV_LOG(WARNING, "rxq%u depth:%u is not equal to queue0 depth:%u.", + qid, nb_desc, nic_dev->rxqs[0]->q_depth); + nb_desc = nic_dev->rxqs[0]->q_depth; + } + /* Queue depth must be power of 2, otherwise will be aligned up. */ rq_depth = (nb_desc & (nb_desc - 1)) ? ((uint16_t)(1U << (rte_log2_u32(nb_desc) + 1))) : nb_desc; @@ -997,8 +1022,7 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH; rxq->rx_deferred_start = rx_conf->rx_deferred_start; /* If buf_len used for function table, need to translated. */ - uint16_t rx_buf_size = - rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rx_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size); if (err) { PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s", @@ -1006,11 +1030,16 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto adjust_bufsize_fail; } - if (buf_size >= HINIC3_RX_BUF_SIZE_4K && - buf_size < HINIC3_RX_BUF_SIZE_16K) - rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; - else - rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + /* If NIC support compact CQE, use compact wqe as default. 
*/ + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + rxq->wqe_type = HINIC3_COMPACT_RQ_WQE; + } else { + if (buf_size >= HINIC3_RX_BUF_SIZE_4K && + buf_size < HINIC3_RX_BUF_SIZE_16K) + rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; + else + rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + } rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type; rxq->wqebb_size = (uint16_t)RTE_BIT32(rxq->wqebb_shift); @@ -1062,36 +1091,52 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_rx_info_fail; } - cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, - rq_depth * sizeof(*rxq->rx_cqe), - RTE_CACHE_LINE_SIZE, socket_id); - if (!cqe_mz) { - PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", - dev->data->name); - err = -ENOMEM; - goto alloc_cqe_mz_fail; - } - memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); - rxq->cqe_mz = cqe_mz; - rxq->cqe_start_paddr = cqe_mz->iova; - rxq->cqe_start_vaddr = cqe_mz->addr; - rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; - - wqe_count = hinic3_rx_fill_wqe(rxq); - if (wqe_count != rq_depth) { - PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", - wqe_count, dev->data->name); - err = -ENOMEM; - goto fill_rx_wqe_fail; + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_ci_mz", qid, + sizeof(*rxq->rq_ci), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (!ci_mz) { + PMD_DRV_LOG(ERR, "Allocate ci mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + + memset(ci_mz->addr, 0, sizeof(*rxq->rq_ci)); + rxq->ci_mz = ci_mz; + rxq->rq_ci = (struct hinic3_rq_ci_wb *)ci_mz->addr; + rxq->rq_ci_paddr = ci_mz->iova; + } else { + cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, + rq_depth * sizeof(*rxq->rx_cqe), + RTE_CACHE_LINE_SIZE, socket_id); + if (!cqe_mz) { + PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); + rxq->cqe_mz = cqe_mz; + rxq->cqe_start_paddr = cqe_mz->iova; + rxq->cqe_start_vaddr = cqe_mz->addr; + rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; + + wqe_count = hinic3_rx_fill_wqe(rxq); + if (wqe_count != rq_depth) { + PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", + wqe_count, dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(cqe_mz); + goto alloc_cqe_ci_mz_fail; + } } - /* Record rxq pointer in rte_eth rx_queues. */ dev->data->rx_queues[qid] = rxq; return 0; -fill_rx_wqe_fail: - hinic3_memzone_free(rxq->cqe_mz); -alloc_cqe_mz_fail: +alloc_cqe_ci_mz_fail: rte_free(rxq->rx_info); alloc_rx_info_fail: @@ -1199,6 +1244,7 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, txq->owner = 1; txq->cos = nic_dev->default_cos; txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->tx_wqe_compact_task = HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev); ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid, HINIC3_CI_Q_ADDR_SIZE, @@ -1246,7 +1292,6 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_tx_info_fail; } - /* Record txq pointer in rte_eth tx_queues. 
*/ dev->data->tx_queues[qid] = txq; return 0; @@ -1274,7 +1319,10 @@ hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) hinic3_free_rxq_mbufs(rxq); - hinic3_memzone_free(rxq->cqe_mz); + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) + hinic3_memzone_free(rxq->ci_mz); + else + hinic3_memzone_free(rxq->cqe_mz); rte_free(rxq->rx_info); rxq->rx_info = NULL; @@ -1323,9 +1371,12 @@ hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) static int hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; + rxq = dev->data->rx_queues[rq_id]; + rc = hinic3_start_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1333,14 +1384,17 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } - dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", - rq_id); - return rc; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", rq_id); + return rc; + } } + + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1358,14 +1412,10 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) static int hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); - return rc; - } rc = hinic3_stop_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1373,6 +1423,15 @@ hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); + return rc; + } + } + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; @@ -1388,6 +1447,7 @@ hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id) HINIC3_SET_TXQ_STARTED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1404,6 +1464,7 @@ hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id) dev->data->name, sq_id); return rc; } + HINIC3_SET_TXQ_STOPPED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -3286,6 +3347,24 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { .flow_ops_get = hinic3_dev_filter_ctrl, }; +static void hinic3_nic_tx_rx_ops_init(struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev)) + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_compact_task_offload; + else + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_normal_task_offload; + + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + nic_dev->rx_ops->nic_rx_get_cqe_info = hinic3_rx_get_compact_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_integrated_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty; + } else { + nic_dev->rx_ops->nic_rx_get_cqe_info = 
hinic3_rx_get_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_separate_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_rq_empty; + } +} + /** * Initialize the network function, including hardware configuration, memory * allocation for data structures, MAC address setup, and interrupt enabling. @@ -3333,6 +3412,27 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto alloc_eth_addr_fail; } + nic_dev->cmdq_ops = rte_zmalloc("cmdq_ops", sizeof(struct hinic3_nic_cmdq_ops), 0); + if (!nic_dev->cmdq_ops) { + PMD_DRV_LOG(ERR, "Allocate cmdq_ops memory failed"); + err = -ENOMEM; + goto alloc_cmdq_ops_fail; + } + + nic_dev->rx_ops = rte_zmalloc("rx_ops", sizeof(struct hinic3_nic_rx_ops), 0); + if (!nic_dev->rx_ops) { + PMD_DRV_LOG(ERR, "Allocate rx_ops memory failed"); + err = -ENOMEM; + goto alloc_rx_ops_fail; + } + + nic_dev->tx_ops = rte_zmalloc("tx_ops", sizeof(struct hinic3_nic_tx_ops), 0); + if (!nic_dev->tx_ops) { + PMD_DRV_LOG(ERR, "Allocate tx_ops memory failed"); + err = -ENOMEM; + goto alloc_tx_ops_fail; + } + nic_dev->mc_list = rte_zmalloc("hinic3_mc", HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0); if (!nic_dev->mc_list) { @@ -3391,6 +3491,8 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) else nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + hinic3_nic_tx_rx_ops_init(nic_dev); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", @@ -3479,6 +3581,18 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) nic_dev->mc_list = NULL; alloc_mc_list_fail: + rte_free(nic_dev->tx_ops); + nic_dev->tx_ops = NULL; + +alloc_tx_ops_fail: + rte_free(nic_dev->rx_ops); + nic_dev->rx_ops = NULL; + +alloc_rx_ops_fail: + rte_free(nic_dev->cmdq_ops); + nic_dev->cmdq_ops = NULL; + +alloc_cmdq_ops_fail: rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 4a5dbb0844..3898edd076 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -14,44 +14,50 @@ #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle) -#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN -#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD -#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD -#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG -#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM -#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM -#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM -#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN -#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK -#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM -#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 -#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 -#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN -#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED -#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH -#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK -#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN -#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM -#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 -#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO -#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM +#define HINIC3_PKT_RX_L4_CKSUM_BAD 
RTE_MBUF_F_RX_L4_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN +#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD +#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD +#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG +#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM +#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM +#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM +#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN +#define HINIC3_PKT_TX_QINQ_PKT RTE_MBUF_F_TX_QINQ +#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK +#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM +#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 +#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 +#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN +#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED +#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH +#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK +#define HINIC3_PKT_TX_TUNNEL_GRE RTE_MBUF_F_TX_TUNNEL_GRE +#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN +#define HINIC3_PKT_TX_TUNNEL_VXLAN_GPE RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE +#define HINIC3_PKT_TX_TUNNEL_GENEVE RTE_MBUF_F_TX_TUNNEL_GENEVE +#define HINIC3_PKT_TX_TUNNEL_IPIP RTE_MBUF_F_TX_TUNNEL_IPIP +#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM +#define HINIC3_PKT_TX_OUTER_UDP_CKSUM RTE_MBUF_F_TX_OUTER_UDP_CKSUM +#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 +#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO +#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM #define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool" /* Mbuf pool for copy invalid mbuf segs. */ -#define HINIC3_COPY_MEMPOOL_DEPTH 1024 -#define HINIC3_COPY_MEMPOOL_CACHE 128 -#define HINIC3_COPY_MBUF_SIZE 4096 +#define HINIC3_COPY_MEMPOOL_DEPTH 1024 +#define HINIC3_COPY_MEMPOOL_CACHE 128 +#define HINIC3_COPY_MBUF_SIZE 4096 -#define HINIC3_DEV_NAME_LEN 32 -#define DEV_STOP_DELAY_MS 100 -#define DEV_START_DELAY_MS 100 -#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 +#define HINIC3_DEV_NAME_LEN 32 +#define DEV_STOP_DELAY_MS 100 +#define DEV_START_DELAY_MS 100 +#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 -#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) -#define HINIC3_MAX_QUEUE_NUM 64 +#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) +#define HINIC3_MAX_QUEUE_NUM 256 #define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \ ((struct hinic3_nic_dev *)(dev)->data->dev_private) @@ -68,23 +74,34 @@ enum hinic3_tx_cvlan_type { }; enum nic_feature_cap { - NIC_F_CSUM = RTE_BIT32(0), - NIC_F_SCTP_CRC = RTE_BIT32(1), - NIC_F_TSO = RTE_BIT32(2), - NIC_F_LRO = RTE_BIT32(3), - NIC_F_UFO = RTE_BIT32(4), - NIC_F_RSS = RTE_BIT32(5), - NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), - NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), - NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), - NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), - NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), - NIC_F_FDIR = RTE_BIT32(11), - NIC_F_PROMISC = RTE_BIT32(12), - NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_CSUM = RTE_BIT32(0), + NIC_F_SCTP_CRC = RTE_BIT32(1), + NIC_F_TSO = RTE_BIT32(2), + NIC_F_LRO = RTE_BIT32(3), + NIC_F_UFO = RTE_BIT32(4), + NIC_F_RSS = RTE_BIT32(5), + NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), + NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), + NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), + NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), + 
NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), + NIC_F_FDIR = RTE_BIT32(11), + NIC_F_PROMISC = RTE_BIT32(12), + NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_PTP_1588_V2 = RTE_BIT32(18), + NIC_F_TX_WQE_COMPACT_TASK = RTE_BIT32(19), + NIC_F_RX_HW_COMPACT_CQE = RTE_BIT32(20), + NIC_F_HTN_CMDQ = RTE_BIT32(21), + NIC_F_GENEVE_OFFLOAD = RTE_BIT32(22), + NIC_F_IPXIP_OFFLOAD = RTE_BIT32(23), + NIC_F_TC_FLOWER_OFFLOAD = RTE_BIT32(24), + NIC_F_HTN_FDIR = RTE_BIT32(25), + NIC_F_SQ_RQ_CI_COALESCE = RTE_BIT32(26), + NIC_F_RX_SW_COMPACT_CQE = RTE_BIT32(27), + }; -#define DEFAULT_DRV_FEATURE 0x3FFF +#define DEFAULT_DRV_FEATURE 0x3FC3FFF TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow); TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow); @@ -133,6 +150,10 @@ struct hinic3_nic_dev { struct hinic3_tcam_info tcam; struct hinic3_ethertype_filter_list filter_ethertype_list; struct hinic3_fdir_rule_filter_list filter_fdir_rule_list; + struct hinic3_nic_cmdq_ops *cmdq_ops; + struct hinic3_nic_rx_ops *rx_ops; + struct hinic3_nic_tx_ops *tx_ops; + }; extern const struct rte_flow_ops hinic3_flow_ops; diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c index 7f2972f1d1..9203dcce40 100644 --- a/drivers/net/hinic3/hinic3_nic_io.c +++ b/drivers/net/hinic3/hinic3_nic_io.c @@ -11,310 +11,194 @@ #include "hinic3_rx.h" #include "hinic3_tx.h" -#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 -#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 -#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF -#define HINIC3_DEAULT_DROP_THD_OFF 0 - -#define WQ_PREFETCH_MAX 6 -#define WQ_PREFETCH_MIN 1 -#define WQ_PREFETCH_THRESHOLD 256 - -#define HINIC3_Q_CTXT_MAX \ - ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) - -enum hinic3_qp_ctxt_type { - HINIC3_QP_CTXT_TYPE_SQ, - HINIC3_QP_CTXT_TYPE_RQ, -}; - -struct hinic3_qp_ctxt_header { - uint16_t num_queues; - uint16_t queue_type; - uint16_t start_qid; - uint16_t rsvd; -}; - -struct hinic3_sq_ctxt { - uint32_t ci_pi; - uint32_t drop_mode_sp; /**< Packet drop mode and special flags. */ - uint32_t wq_pfn_hi_owner; /**< High PFN and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd0; /**< Reserved field 0. */ - uint32_t pkt_drop_thd; /**< Packet drop threshold. */ - uint32_t global_sq_id; - uint32_t vlan_ceq_attr; /**< VLAN and CEQ attributes. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t rsvd8; /**< Reserved field 8. */ - uint32_t rsvd9; /**< Reserved field 9. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_rq_ctxt { - uint32_t ci_pi; - uint32_t ceq_attr; /**< Completion event queue attributes. */ - uint32_t wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd[3]; /**< Reserved field. */ - uint32_t cqe_sge_len; /**< CQE scatter/gather element length. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. 
*/ - - uint32_t pi_paddr_hi; /**< High 32-bits of PI DMA address. */ - uint32_t pi_paddr_lo; /**< Low 32-bits of PI DMA address. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_sq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_rq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; - uint32_t rsvd; -}; - -#define SQ_CTXT_SIZE(num_sqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_sqs) * sizeof(struct hinic3_sq_ctxt))) - -#define RQ_CTXT_SIZE(num_rqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - -#define CI_IDX_HIGH_SHIFH 12 +#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 +#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 +#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF +#define HINIC3_DEAULT_DROP_THD_OFF 0 + +#define WQ_PREFETCH_MAX 6 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +#define CI_IDX_HIGH_SHIFH 12 #define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) -#define SQ_CTXT_PI_IDX_SHIFT 0 -#define SQ_CTXT_CI_IDX_SHIFT 16 +#define SQ_CTXT_PI_IDX_SHIFT 0 +#define SQ_CTXT_CI_IDX_SHIFT 16 -#define SQ_CTXT_PI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_IDX_MASK 0xFFFFU +#define SQ_CTXT_PI_IDX_MASK 0xFFFFU +#define SQ_CTXT_CI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_PI_SET(val, member) \ +#define SQ_CTXT_CI_PI_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 -#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 +#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 +#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 -#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U -#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U +#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U +#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U -#define SQ_CTXT_MODE_SET(val, member) \ - (((val) & SQ_CTXT_MODE_##member##_MASK) \ +#define SQ_CTXT_MODE_SET(val, member) \ + (((val) & SQ_CTXT_MODE_##member##_MASK) \ << SQ_CTXT_MODE_##member##_SHIFT) -#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 +#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 -#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define SQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define SQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ << SQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 -#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 +#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 +#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 -#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ - (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ +#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ + (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ << SQ_CTXT_PKT_DROP_##member##_SHIFT) -#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 +#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 -#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU +#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU #define 
SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_VLAN_TAG_SHIFT 0 -#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 -#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 -#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 +#define SQ_CTXT_VLAN_TAG_SHIFT 0 +#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 +#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 +#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 -#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU -#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U -#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U -#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U +#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU +#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U +#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U +#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U -#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ - (((val) & SQ_CTXT_VLAN_##member##_MASK) \ +#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ + (((val) & SQ_CTXT_VLAN_##member##_MASK) \ << SQ_CTXT_VLAN_##member##_SHIFT) -#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define SQ_CTXT_PREF_CI_HI_SHIFT 0 -#define SQ_CTXT_PREF_OWNER_SHIFT 4 +#define SQ_CTXT_PREF_CI_HI_SHIFT 0 +#define SQ_CTXT_PREF_OWNER_SHIFT 4 -#define SQ_CTXT_PREF_CI_HI_MASK 0xFU -#define SQ_CTXT_PREF_OWNER_MASK 0x1U +#define SQ_CTXT_PREF_CI_HI_MASK 0xFU +#define SQ_CTXT_PREF_OWNER_MASK 0x1U -#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define SQ_CTXT_PREF_SET(val, member) \ - (((val) & SQ_CTXT_PREF_##member##_MASK) \ +#define SQ_CTXT_PREF_SET(val, member) \ + (((val) & SQ_CTXT_PREF_##member##_MASK) \ << SQ_CTXT_PREF_##member##_SHIFT) -#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT) -#define RQ_CTXT_PI_IDX_SHIFT 0 -#define RQ_CTXT_CI_IDX_SHIFT 16 +#define RQ_CTXT_PI_IDX_SHIFT 0 +#define RQ_CTXT_CI_IDX_SHIFT 16 -#define RQ_CTXT_PI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_IDX_MASK 0xFFFFU +#define RQ_CTXT_PI_IDX_MASK 0xFFFFU +#define RQ_CTXT_CI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_PI_SET(val, member) \ +#define RQ_CTXT_CI_PI_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 -#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 +#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 +#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 -#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_INTR_MASK 
0x3FFU +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ - (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ +#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ + (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT) -#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 -#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 +#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 +#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 -#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U -#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U +#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define RQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define RQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ << RQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define RQ_CTXT_CQE_LEN_SHIFT 28 +#define RQ_CTXT_CQE_LEN_SHIFT 28 -#define RQ_CTXT_CQE_LEN_MASK 0x3U +#define RQ_CTXT_CQE_LEN_MASK 0x3U -#define RQ_CTXT_CQE_LEN_SET(val, member) \ +#define RQ_CTXT_CQE_LEN_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define RQ_CTXT_PREF_CI_HI_SHIFT 0 -#define RQ_CTXT_PREF_OWNER_SHIFT 4 +#define RQ_CTXT_PREF_CI_HI_SHIFT 0 +#define RQ_CTXT_PREF_OWNER_SHIFT 4 -#define RQ_CTXT_PREF_CI_HI_MASK 0xFU -#define RQ_CTXT_PREF_OWNER_MASK 0x1U +#define RQ_CTXT_PREF_CI_HI_MASK 0xFU +#define RQ_CTXT_PREF_OWNER_MASK 0x1U -#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define RQ_CTXT_PREF_SET(val, member) \ - (((val) & RQ_CTXT_PREF_##member##_MASK) \ +#define RQ_CTXT_PREF_SET(val, member) \ + (((val) & RQ_CTXT_PREF_##member##_MASK) \ << RQ_CTXT_PREF_##member##_SHIFT) -#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ << RQ_CTXT_WQ_BLOCK_##member##_SHIFT) #define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4) -#define WQ_PAGE_PFN_SHIFT 12 -#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_PAGE_PFN_SHIFT 12 +#define WQ_BLOCK_PFN_SHIFT 9 #define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT) #define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -/** - * Prepare the command queue header and converted it to big-endian format. 
- * - * @param[out] qp_ctxt_hdr - * Pointer to command queue context header structure to be initialized. - * @param[in] ctxt_type - * Type of context (SQ/RQ) to be set in header. - * @param[in] num_queues - * Number of queues. - * @param[in] q_id - * Starting queue ID for this context. - */ -static void -hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, - enum hinic3_qp_ctxt_type ctxt_type, - uint16_t num_queues, uint16_t q_id) -{ - qp_ctxt_hdr->queue_type = ctxt_type; - qp_ctxt_hdr->num_queues = num_queues; - qp_ctxt_hdr->start_qid = q_id; - qp_ctxt_hdr->rsvd = 0; - - rte_atomic_thread_fence(rte_memory_order_seq_cst); +#define CQE_CTX_CI_ADDR_SHIFT 4 - hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); -} - -/** - * Initialize context structure for specified TXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] sq - * Pointer to TXQ structure. - * @param[in] sq_id - * ID of TXQ being configured. - * @param[out] sq_ctxt - * Pointer to structure that will hold TXQ context. - */ -static void +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, struct hinic3_sq_ctxt *sq_ctxt) { @@ -386,22 +270,13 @@ hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt)); } -/** - * Initialize context structure for specified RXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] rq - * Pointer to RXQ structure. - * @param[out] rq_ctxt - * Pointer to structure that will hold RXQ context. - */ -static void +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) { uint64_t wq_page_addr, wq_page_pfn, wq_block_pfn; uint32_t wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo; uint16_t pi_start, ci_start; - uint16_t wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT; + uint16_t wqe_type = rq->wqe_type; uint8_t intr_disable; /* RQ depth is in unit of 8 Bytes. */ @@ -446,6 +321,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE); rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN); break; + case HINIC3_COMPACT_RQ_WQE: + /* Use 8Byte WQE without SGE for CQE. */ + rq_ctxt->wq_pfn_hi_type_owner |= RQ_CTXT_WQ_PAGE_SET(3, WQE_TYPE); + break; default: PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type); } @@ -495,12 +374,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) static int init_sq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL; - struct hinic3_sq_ctxt *sq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_txq *sq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -511,28 +388,14 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_sqs) { - sq_ctxt_block = cmd_buf->buf; - sq_ctxt = sq_ctxt_block->sq_ctxt; - max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX ? 
HINIC3_Q_CTXT_MAX : (nic_dev->num_sqs - q_id); - - hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_SQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - sq = nic_dev->txqs[curr_id]; - hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]); - } - - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set SQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -563,12 +426,10 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) static int init_rq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL; - struct hinic3_rq_ctxt *rq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_rxq *rq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -579,28 +440,14 @@ init_rq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_rqs) { - rq_ctxt_block = cmd_buf->buf; - rq_ctxt = rq_ctxt_block->rq_ctxt; - max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX ? HINIC3_Q_CTXT_MAX : (nic_dev->num_rqs - q_id); - - hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_RQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - rq = nic_dev->rxqs[curr_id]; - hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]); - } - - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_RQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set RQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -633,9 +480,9 @@ static int clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; struct hinic3_cmd_buf *cmd_buf; uint64_t out_param = 0; + uint8_t cmd; int err; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -644,26 +491,11 @@ clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, return -ENOMEM; } - /* Construct related command request. */ - ctxt_block = cmd_buf->buf; - /* Assumed max_rqs must be equal to max_sqs. */ - ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; - ctxt_block->cmdq_hdr.queue_type = ctxt_type; - ctxt_block->cmdq_hdr.start_qid = 0; - /* - * Add a memory barrier to ensure that instructions are not out of order - * due to compilation optimization. - */ - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); - - cmd_buf->size = sizeof(*ctxt_block); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_clean_tso_lro_space(nic_dev, cmd_buf, ctxt_type); /* Send a command to hardware to clean up queue offload context. 
*/ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if ((err) || (out_param)) { PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, err: %d, out_param: %" PRIu64, @@ -705,6 +537,62 @@ hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev) nic_dev->rx_buff_len = buf_size; } +#define HINIC3_RX_CQE_TIMER_LOOP 15 +#define HINIC3_RX_CQE_COALESCE_NUM 63 + +int +hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rxq *rxq = NULL; + struct hinic3_rq_cqe_ctx cqe_ctx = { 0 }; + rte_iova_t rq_ci_paddr; + uint16_t out_size = sizeof(cqe_ctx); + uint16_t q_id = 0; + uint16_t cmd; + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ) + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN; + else + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX; + + while (q_id < nic_dev->num_rqs) { + rxq = nic_dev->rxqs[q_id]; + if (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE) { + rq_ci_paddr = rxq->rq_ci_paddr >> CQE_CTX_CI_ADDR_SHIFT; + cqe_ctx.ci_addr_hi = upper_32_bits(rq_ci_paddr); + cqe_ctx.ci_addr_lo = lower_32_bits(rq_ci_paddr); + cqe_ctx.threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM; + cqe_ctx.timer_loop = HINIC3_RX_CQE_TIMER_LOOP; + } else { + cqe_ctx.threshold_cqe_num = 0; + cqe_ctx.timer_loop = 0; + } + + cqe_ctx.cqe_type = (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE); + cqe_ctx.msix_entry_idx = rxq->msix_entry_idx; + cqe_ctx.rq_id = q_id; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, + &cqe_ctx, sizeof(cqe_ctx), + &cqe_ctx, &out_size); + if (err || !out_size || cqe_ctx.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed, qid: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, err, cqe_ctx.msg_head.status, out_size); + return -EFAULT; + } + q_id++; + } + + return 0; +} + int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) { @@ -768,13 +656,50 @@ hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) } } + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + /* Init Rxq CQE context. 
*/ + err = hinic3_init_rq_cqe_ctxts(nic_dev); + if (err) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed"); + goto set_cqe_ctx_fail; + } + } + return 0; +set_cqe_ctx_fail: set_cons_idx_table_err: hinic3_clean_root_ctxt(hwdev); return err; } +int +hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rq_enable msg; + uint16_t out_size = sizeof(msg); + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + memset(&msg, 0, sizeof(msg)); + msg.rq_enable = enable; + msg.rq_id = q_id; + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN, + &msg, sizeof(msg), &msg, &out_size); + if (err || !out_size || msg.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq enable failed, qid: %u, enable: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, enable, err, msg.msg_head.status, out_size); + return -EFAULT; + } + + return 0; +} + void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev) { diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index e589deed23..e1741d1156 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -28,11 +28,6 @@ #define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) -#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) -#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -231,6 +226,31 @@ hinic3_write_db(void *db_addr, uint16_t q_id, int cos, uint8_t cflag, uint16_t p */ void hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev); +/** + * Initialize RQ integrated CQE context + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev); + +/** + * Set RQ disable or enable + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * @param[in] q_id + * Receive queue id. + * @param[in] enable + * 1: enable 0: disable + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable); + /** * Initialize qps contexts, set SQ ci attributes, arm all SQ. * @@ -264,9 +284,6 @@ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); * Pointer to ethernet device structure. * @param[out] s_feature * s_feature driver supported. - * - * @return - * 0 on success, non-zero on failure. */ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_feature); @@ -281,4 +298,29 @@ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_fea */ uint64_t hinic3_get_driver_feature(struct hinic3_nic_dev *nic_dev); +/** + * Initialize context structure for specified TXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] sq + * Pointer to TXQ structure. + * @param[in] sq_id + * ID of TXQ being configured. + * @param[out] sq_ctxt + * Pointer to structure that will hold TXQ context. + */ +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, + struct hinic3_sq_ctxt *sq_ctxt); + +/** + * Initialize context structure for specified RXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). 
+ * + * @param[in] rq + * Pointer to RXQ structure. + * @param[out] rq_ctxt + * Pointer to structure that will hold RXQ context. + */ +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt); + #endif /* _HINIC3_NIC_IO_H_ */ diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 1a92df59b7..7ae39e3e91 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -279,6 +279,24 @@ struct __rte_cache_aligned hinic3_rxq { #endif }; +/* Rx CQE info get callback function */ +typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/* Rx CQE check status callback function */ +typedef bool (*nic_rx_cqe_done_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe **rx_cqe); + +/* Rx CQE empty poll callback function */ +typedef int (*nic_rx_poll_rq_empty_t)(struct hinic3_rxq *rxq); + +struct hinic3_nic_rx_ops { + nic_rx_get_cqe_info_t nic_rx_get_cqe_info; + nic_rx_cqe_done_t nic_rx_cqe_done; + nic_rx_poll_rq_empty_t nic_rx_poll_rq_empty; +}; + uint16_t hinic3_rx_fill_wqe(struct hinic3_rxq *rxq); uint16_t hinic3_rx_fill_buffers(struct hinic3_rxq *rxq); diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index d150f7c6a4..21958a00cc 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -304,6 +304,14 @@ struct __rte_cache_aligned hinic3_txq { #endif }; +/* Tx WQE offload set callback function */ +typedef void (*nic_tx_set_wqe_offload_t)(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +struct hinic3_nic_tx_ops { + nic_tx_set_wqe_offload_t nic_tx_set_wqe_offload; +}; + void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev); void hinic3_free_txq_mbufs(struct hinic3_txq *txq); void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c index d997647f48..634dfe7239 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_htn_cmdq.h" +#define HTN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define HTN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_htn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -27,7 +32,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_htn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id, uint16_t func_id) { @@ -45,7 +50,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_htn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t func_id; 
uint16_t i; @@ -65,9 +70,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; } @@ -75,10 +80,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_htn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_htn_vlan_ctx); + vlan_ctx = (struct hinic3_htn_vlan_ctx *)cmd_buf->buf; vlan_ctx->dest_func_id = func_id; vlan_ctx->start_qid = q_id; @@ -87,7 +92,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_htn_vlan_ctx)); return HINIC3_HTN_CMD_SVLAN_MODIFY; } diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 89709efdd0..3dbbd53174 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -7,7 +7,7 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_htn_qp_ctxt_header { uint32_t rsvd[2]; uint16_t num_queues; uint16_t queue_type; @@ -15,12 +15,12 @@ struct hinic3_qp_ctxt_header { uint16_t dest_func_id; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_clean_queue_ctxt { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_qp_ctxt_block { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; @@ -43,7 +43,7 @@ enum hinic3_htn_cmd { HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE }; -struct hinic3_vlan_ctx { +struct hinic3_htn_vlan_ctx { uint32_t rsv[2]; uint16_t vlan_tag; uint8_t vlan_sel; diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index 3d4becf07c..dfe8598f78 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_stn_cmdq.h" +#define STN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define STN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_stn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -26,7 +31,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; } 
-static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_stn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id) { @@ -44,7 +49,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_stn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t i; qp_ctxt_block = cmd_buf->buf; @@ -62,9 +67,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; } @@ -72,10 +77,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_stn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_stn_vlan_ctx); + vlan_ctx = (struct hinic3_stn_vlan_ctx *)cmd_buf->buf; vlan_ctx->func_id = func_id; vlan_ctx->qid = q_id; @@ -84,7 +89,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_stn_vlan_ctx)); return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index 8235dcd0fa..f1720c29c7 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -7,27 +7,27 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_stn_qp_ctxt_header { uint16_t num_queues; uint16_t queue_type; uint16_t start_qid; uint16_t rsvd; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_clean_queue_ctxt { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; uint32_t rsvd; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_qp_ctxt_block { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; }; }; -struct hinic3_vlan_ctx { +struct hinic3_stn_vlan_ctx { uint32_t func_id; uint32_t qid; /* if qid = 0xFFFF, config for all queues */ uint32_t vlan_id; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
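The patch above programs the RQ CQE context with a CI write-back address (rq_ci_paddr) when compact CQEs are enabled, and the RX changes in the next patch poll that word instead of per-descriptor CQE done bits: the NIC DMA-writes its consumer index into host memory and the driver walks its software CI up to it. A minimal stand-alone model of that polling loop is shown below; the identifiers are invented, and the endianness conversion and memory barriers a real driver needs are deliberately omitted.

/* Toy model of CI write-back polling with a power-of-two ring. */
#include <stdint.h>
#include <stdio.h>

#define Q_DEPTH 8u
#define Q_MASK  (Q_DEPTH - 1)

struct rxq {
	volatile uint16_t hw_ci_wb; /* written back by the "hardware" (DMA) */
	uint16_t sw_ci;             /* driver's consumer index */
};

/* Drain completed descriptors up to the hardware CI, no CQE status reads. */
static unsigned int rx_drain(struct rxq *q)
{
	uint16_t hw_ci = (uint16_t)(q->hw_ci_wb & Q_MASK);
	unsigned int n = 0;

	while (q->sw_ci != hw_ci) {
		/* A real driver would free/replenish the buffer at sw_ci here. */
		q->sw_ci = (uint16_t)((q->sw_ci + 1) & Q_MASK);
		n++;
	}
	return n;
}

int main(void)
{
	struct rxq q = { .hw_ci_wb = 0, .sw_ci = 0 };

	q.hw_ci_wb = 3; /* pretend the NIC completed 3 receive buffers */
	printf("%u packets ready\n", rx_drain(&q));
	return 0;
}

Reading the hardware index once per poll and draining up to it keeps the hot path to a single load of a word the NIC updates by DMA, which is what the commit log means by looping on the CI to check whether packets have arrived.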
* [PATCH v7 5/7] net/hinic3: add rx ops to support Compact CQE 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang ` (3 preceding siblings ...) 2026-03-23 8:04 ` [PATCH v7 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang @ 2026-03-23 8:04 ` Feifei Wang 2026-03-23 8:04 ` [PATCH v7 6/7] net/hinic3: add tx " Feifei Wang 2026-03-23 8:04 ` [PATCH v7 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-23 8:04 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt receive path, use different func callback to separate normal CQE process and Compact CQE process. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.h | 3 +- drivers/net/hinic3/hinic3_rx.c | 242 ++++++++++++++++++++++------- drivers/net/hinic3/hinic3_rx.h | 164 ++++++++++++++++++- 3 files changed, 343 insertions(+), 66 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 3898edd076..9061e2b217 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -121,8 +121,7 @@ struct hinic3_nic_dev { uint16_t mtu_size; uint16_t rss_state; - uint8_t num_rss; /**< Number of RSS queues. */ - uint8_t rsvd0; /**< Reserved field 0. */ + uint16_t num_rss; /**< Number of RSS queues. */ uint32_t rx_mode; uint8_t rx_queue_list[HINIC3_MAX_QUEUE_NUM]; diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 3d5f4e4524..9e2c80f759 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -219,11 +219,11 @@ hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq) while (free_wqebbs++ < rxq->q_depth) { ci = hinic3_get_rq_local_ci(rxq); - - rx_cqe = &rxq->rx_cqe[ci]; - - /* Clear done bit. */ - rx_cqe->status = 0; + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + rx_cqe = &rxq->rx_cqe[ci]; + /* Clear done bit. */ + rx_cqe->status = 0; + } rx_info = &rxq->rx_info[ci]; rte_pktmbuf_free(rx_info->mbuf); @@ -299,7 +299,7 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) for (i = 0; i < rearm_wqebbs; i++) { dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]); - /* Fill buffer address only. */ + /* Fill packet dma address into wqe. 
*/ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { rq_wqe->extend_wqe.buf_desc.sge.hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); @@ -307,11 +307,16 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) hinic3_hw_be32(lower_32_bits(dma_addr)); rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; - } else { + } else if (rxq->wqe_type == HINIC3_NORMAL_RQ_WQE) { rq_wqe->normal_wqe.buf_hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); rq_wqe->normal_wqe.buf_lo_addr = hinic3_hw_be32(lower_32_bits(dma_addr)); + } else { + rq_wqe->compact_wqe.buf_hi_addr = + hinic3_hw_be32(upper_32_bits(dma_addr)); + rq_wqe->compact_wqe.buf_lo_addr = + hinic3_hw_be32(lower_32_bits(dma_addr)); } rq_wqe = @@ -355,7 +360,7 @@ hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev, void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1)); @@ -372,7 +377,7 @@ hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev) static void hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, uint32_t *indir_tbl) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; int i = 0; int j; @@ -522,7 +527,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { uint8_t queue_pos; - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list, rss_queue_count, queue_id); @@ -534,8 +539,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, rss_queue_count--; memmove(nic_dev->rx_queue_list + queue_pos, nic_dev->rx_queue_list + queue_pos + 1, - (rss_queue_count - queue_pos) * - sizeof(nic_dev->rx_queue_list[0])); + (rss_queue_count - queue_pos) * sizeof(nic_dev->rx_queue_list[0])); } RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list)); @@ -618,6 +622,33 @@ hinic3_poll_rq_empty(struct hinic3_rxq *rxq) return err; } +int +hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq) +{ + struct hinic3_rx_info *rx_info; + struct hinic3_rq_ci_wb rq_ci; + uint16_t sw_ci; + uint16_t hw_ci; + uint32_t val; + + sw_ci = hinic3_get_rq_local_ci(rxq); + val = rte_read32(&rxq->rq_ci->dw1.value); + rq_ci.dw1.value = hinic3_hw_cpu32(val); + hw_ci = rq_ci.dw1.bs.hw_ci; + + while (sw_ci != hw_ci) { + rx_info = &rxq->rx_info[sw_ci]; + rte_pktmbuf_free(rx_info->mbuf); + rx_info->mbuf = NULL; + + sw_ci++; + sw_ci &= rxq->q_mask; + hinic3_update_rq_local_ci(rxq, 1); + } + + return 0; +} + void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, uint32_t *cqe_hole_cnt, uint32_t *head_ci, uint32_t *head_done) @@ -701,14 +732,17 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) rte_spinlock_unlock(&nic_dev->queue_list_lock); /* Send flush rxq cmd to device. 
*/ - err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + else + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, false); if (err) { PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto rq_flush_failed; } - err = hinic3_poll_rq_empty(rxq); + err = nic_dev->rx_ops->nic_rx_poll_rq_empty(rxq); if (err) { hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt, &head_ci, &head_done); @@ -724,6 +758,7 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return 0; poll_rq_failed: + hinic3_set_rq_enable(nic_dev, rxq->q_id, true); rq_flush_failed: rte_spinlock_lock(&nic_dev->queue_list_lock); set_indir_failed: @@ -746,14 +781,22 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id); if (nic_dev->rss_state == HINIC3_RSS_ENABLE) { - err = hinic3_refill_indir_rqid(rxq); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_FDIR) != 0) + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, true); if (err) { - PMD_DRV_LOG(ERR, - "Refill rq to indirect table failed, eth_dev:%s, queue_idx:%d err:%d", - nic_dev->dev_name, rxq->q_id, err); - hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", + nic_dev->dev_name, rxq->q_id); + } else { + err = hinic3_refill_indir_rqid(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Refill rq to indirect table failed," + "eth_dev:%s, queue_idx:%d err:%d", + nic_dev->dev_name, rxq->q_id, err); + hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + } } } + hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); @@ -772,12 +815,9 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) static inline uint64_t -hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) +hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { - uint16_t vlan_tag; - - vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len); - if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) { + if (!vlan_offload || vlan_tag == 0) { *vlan_tci = 0; return 0; } @@ -788,16 +828,14 @@ hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) } static inline uint64_t -hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) +hinic3_rx_csum(uint16_t csum_err, struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; - uint32_t csum_err; uint64_t flags; if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD))) return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN; - csum_err = HINIC3_GET_RX_CSUM_ERR(status); if (likely(csum_err == 0)) return (HINIC3_PKT_RX_IP_CKSUM_GOOD | HINIC3_PKT_RX_L4_CKSUM_GOOD); @@ -832,11 +870,9 @@ hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) } static inline uint64_t -hinic3_rx_rss_hash(uint32_t offload_type, uint32_t rss_hash_value, uint32_t *rss_hash) +hinic3_rx_rss_hash(uint32_t rss_type, uint32_t rss_hash_value, uint32_t *rss_hash) { - uint32_t rss_type; - rss_type = HINIC3_GET_RSS_TYPES(offload_type); if (likely(rss_type != 0)) { *rss_hash = rss_hash_value; return HINIC3_PKT_RX_RSS_HASH; @@ -931,18 +967,118 @@ hinic3_start_all_rqs(struct rte_eth_dev *eth_dev) return err; } +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + volatile struct hinic3_rq_cqe *cqe = NULL; + uint16_t 
sw_ci; + uint32_t status; + + sw_ci = hinic3_get_rq_local_ci(rxq); + *rx_cqe = &rxq->rx_cqe[sw_ci]; + cqe = *rx_cqe; + + status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&cqe->status, + rte_memory_order_acquire))); + if (!HINIC3_GET_RX_DONE(status)) + return false; + + return true; +} + +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_ci_wb rq_ci; + struct rte_mbuf *rxm = NULL; + uint16_t sw_ci, hw_ci; + uint32_t val; + + sw_ci = hinic3_get_rq_local_ci(rxq); + val = rte_read32(&rxq->rq_ci->dw1.value); + rq_ci.dw1.value = hinic3_hw_cpu32(val); + hw_ci = rq_ci.dw1.bs.hw_ci; + + if (hw_ci == sw_ci) + return false; + + rxm = rxq->rx_info[sw_ci].mbuf; + + *rx_cqe = (volatile struct hinic3_rq_cqe *)rte_mbuf_data_addr_default(rxm); + + return true; +} + +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0 = hinic3_hw_cpu32(rx_cqe->status); + uint32_t dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); + + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + + cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); + cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); + + cqe_info->ptype = HINIC3_GET_RX_PTYPE_OFFLOAD(dw0); + cqe_info->vlan_offload = RQ_CQE_OFFOLAD_TYPE_GET(dw2, VLAN_EN); + cqe_info->rss_type = RQ_CQE_OFFOLAD_TYPE_GET(dw2, RSS_TYPE); + cqe_info->rss_hash_value = dw3; +} + +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0, dw1, dw2; + + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + dw0 = hinic3_hw_cpu32(rx_cqe->status); + dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + } else { + /* Compact Rx CQE mode integrates cqe with packet in big endian way. */ + dw0 = rte_be_to_cpu_32(rx_cqe->status); + dw1 = rte_be_to_cpu_32(rx_cqe->vlan_len); + dw2 = rte_be_to_cpu_32(rx_cqe->offload_type); + } + + cqe_info->cqe_type = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_TYPE); + cqe_info->csum_err = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->vlan_offload = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, VLAN_EN); + cqe_info->cqe_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_LEN); + cqe_info->pkt_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PKT_LEN); + cqe_info->ts_flag = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, TS_FLAG); + cqe_info->ptype = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PTYPE); + cqe_info->rss_hash_value = dw1; + + if (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) { + cqe_info->lro_num = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, NUM_LRO); + cqe_info->vlan_tag = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, VLAN); + } + + if (cqe_info->cqe_type == HINIC3_RQ_CQE_INTEGRATE) + cqe_info->data_offset = + (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) ? 
16 : 8; +} + #define HINIC3_RX_EMPTY_THRESHOLD 3 uint16_t hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct hinic3_rxq *rxq = rx_queue; + struct hinic3_nic_dev *nic_dev = rxq->nic_dev; struct hinic3_rx_info *rx_info = NULL; volatile struct hinic3_rq_cqe *rx_cqe = NULL; + struct hinic3_cqe_info cqe_info = {0}; struct rte_mbuf *rxm = NULL; - uint16_t sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0; - uint32_t status, pkt_len, vlan_len, offload_type, lro_num; + uint16_t sw_ci, rx_buf_len, pkts = 0; + uint32_t pkt_len; uint64_t rx_bytes = 0; - uint32_t hash_value; #ifdef HINIC3_XSTAT_PROF_RX uint64_t t1 = rte_get_tsc_cycles(); @@ -953,20 +1089,22 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) goto out; sw_ci = hinic3_get_rq_local_ci(rxq); - rx_buf_len = rxq->buf_len; while (pkts < nb_pkts) { rx_cqe = &rxq->rx_cqe[sw_ci]; - status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&rx_cqe->status, - rte_memory_order_acquire))); - if (!HINIC3_GET_RX_DONE(status)) { + if (!nic_dev->rx_ops->nic_rx_cqe_done(rxq, &rx_cqe)) { rxq->rxq_stats.empty++; break; } - vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len); + nic_dev->rx_ops->nic_rx_get_cqe_info(rxq, rx_cqe, &cqe_info); - pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len); + pkt_len = cqe_info.pkt_len; + /* + * Compact Rx CQE mode integrates cqe with packet, + * so mbuf length needs to remove the length of cqe. + */ + rx_buf_len = rxq->buf_len - cqe_info.data_offset; rx_info = &rxq->rx_info[sw_ci]; rxm = rx_info->mbuf; @@ -982,7 +1120,7 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (likely(pkt_len <= rx_buf_len)) { rxm->data_len = (uint16_t)pkt_len; rxm->pkt_len = pkt_len; - wqebb_cnt++; + hinic3_update_rq_local_ci(rxq, 1); } else { rxm->data_len = rx_buf_len; rxm->pkt_len = rx_buf_len; @@ -991,33 +1129,28 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * If receive jumbo, updating ci will be done by * hinic3_recv_jumbo_pkt function. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1); - wqebb_cnt = 0; + hinic3_update_rq_local_ci(rxq, 1); hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len); sw_ci = hinic3_get_rq_local_ci(rxq); } - rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_off = RTE_PKTMBUF_HEADROOM + cqe_info.data_offset; rxm->port = rxq->port_id; /* 4. Rx checksum offload. */ - rxm->ol_flags |= hinic3_rx_csum(status, rxq); + rxm->ol_flags |= hinic3_rx_csum(cqe_info.csum_err, rxq); /* 5. Vlan offload. */ - offload_type = hinic3_hw_cpu32(rx_cqe->offload_type); - - rxm->ol_flags |= - hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci); + rxm->ol_flags |= hinic3_rx_vlan(cqe_info.vlan_offload, cqe_info.vlan_tag, + &rxm->vlan_tci); /* 6. RSS. */ - hash_value = hinic3_hw_cpu32(rx_cqe->hash_val); - rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value, + rxm->ol_flags |= hinic3_rx_rss_hash(cqe_info.rss_type, cqe_info.rss_hash_value, &rxm->hash.rss); /* 8. LRO. */ - lro_num = HINIC3_GET_RX_NUM_LRO(status); - if (unlikely(lro_num != 0)) { + if (unlikely(cqe_info.lro_num != 0)) { rxm->ol_flags |= HINIC3_PKT_RX_LRO; - rxm->tso_segsz = pkt_len / lro_num; + rxm->tso_segsz = pkt_len / cqe_info.lro_num; } rx_cqe->status = 0; @@ -1027,9 +1160,6 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } if (pkts) { - /* 9. Update local ci. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt); - /* Update packet stats. 
*/ rxq->rxq_stats.packets += pkts; rxq->rxq_stats.bytes += rx_bytes; diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 7ae39e3e91..129c2b4a59 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -5,15 +5,13 @@ #ifndef _HINIC3_RX_H_ #define _HINIC3_RX_H_ -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0 -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19 -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_SHIFT 0 +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_MASK 0xFFFU +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU #define DPI_EXT_ACTION_FILED (1ULL << 32) @@ -21,6 +19,9 @@ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \ RQ_CQE_OFFOLAD_TYPE_##member##_MASK) +#define HINIC3_GET_RX_PTYPE_OFFLOAD(offload_type) \ + RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PTYPE_OFFLOAD) + #define HINIC3_GET_RX_PKT_TYPE(offload_type) \ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE) @@ -122,6 +123,54 @@ #define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD) +/* Compact CQE Field */ +/* cqe dw0 */ +#define RQ_COMPACT_CQE_STATUS_RXDONE_SHIFT 31 +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_SHIFT 30 +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_SHIFT 29 +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_SHIFT 28 +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_SHIFT 25 +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_SHIFT 24 +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_SHIFT 23 +#define RQ_COMPACT_CQE_STATUS_PKT_MC_SHIFT 21 +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_SHIFT 19 +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PTYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_SHIFT 0 + +#define RQ_COMPACT_CQE_STATUS_RXDONE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_MC_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_MASK 0x3U +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_PTYPE_MASK 0xFFFU +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_STATUS_GET(val, member) \ + ((((val) >> RQ_COMPACT_CQE_STATUS_##member##_SHIFT) & \ + RQ_COMPACT_CQE_STATUS_##member##_MASK)) + +#define HINIC3_RQ_CQE_SEPARATE 0 +#define HINIC3_RQ_CQE_INTEGRATE 1 + +/* cqe dw2 */ +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_SHIFT 24 +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_SHIFT 8 + +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_MASK 0xFFU +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(val, member) \ + (((val) >> RQ_COMPACT_CQE_OFFLOAD_##member##_SHIFT) & \ + RQ_COMPACT_CQE_OFFLOAD_##member##_MASK) + +#define HINIC3_RQ_COMPACT_CQE_16BYTE 0 +#define HINIC3_RQ_COMPACT_CQE_8BYTE 1 /* Rx cqe checksum err */ #define HINIC3_RX_CSUM_IP_CSUM_ERR RTE_BIT32(0) #define HINIC3_RX_CSUM_TCP_CSUM_ERR RTE_BIT32(1) @@ -195,6 +244,25 @@ struct __rte_cache_aligned hinic3_rq_cqe 
{ uint32_t pkt_info; }; +struct hinic3_cqe_info { + uint8_t data_offset; + uint8_t lro_num; + uint8_t vlan_offload; + uint8_t cqe_len; + + uint8_t cqe_type; + uint8_t ts_flag; + uint16_t csum_err; + + uint16_t vlan_tag; + uint16_t ptype; + + uint16_t pkt_len; + uint16_t rss_type; + + uint32_t rss_hash_value; +}; + /** * Attention: please do not add any member in hinic3_rx_info * because rxq bulk rearm mode will write mbuf in rx_info. @@ -220,13 +288,32 @@ struct hinic3_rq_normal_wqe { uint32_t cqe_lo_addr; }; +struct hinic3_rq_compact_wqe { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + struct hinic3_rq_wqe { union { + struct hinic3_rq_compact_wqe compact_wqe; struct hinic3_rq_normal_wqe normal_wqe; struct hinic3_rq_extend_wqe extend_wqe; }; }; +struct hinic3_rq_ci_wb { + union { + struct { + uint16_t cqe_num; + uint16_t hw_ci; + } bs; + RTE_ATOMIC(uint32_t) value; + } dw1; + + uint32_t rsvd[3]; +}; + + struct __rte_cache_aligned hinic3_rxq { struct hinic3_nic_dev *nic_dev; @@ -263,6 +350,10 @@ struct __rte_cache_aligned hinic3_rxq { struct hinic3_rq_cqe *rx_cqe; struct rte_mempool *mb_pool; + const struct rte_memzone *ci_mz; + struct hinic3_rq_ci_wb *rq_ci; + rte_iova_t rq_ci_paddr; + const struct rte_memzone *cqe_mz; rte_iova_t cqe_start_paddr; void *cqe_start_vaddr; @@ -308,6 +399,7 @@ void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev); int hinic3_update_rss_config(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq); int hinic3_poll_rq_empty(struct hinic3_rxq *rxq); void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, @@ -369,4 +461,60 @@ hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, uint16_t wqe_cnt) rxq->delta += wqe_cnt; } +/** + * Get receive cqe information + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * Receive cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Get receive compact cqe information + * + * @param[in] rx_queue + * Receive queue + * @param[in] rx_cqe + * Receive compact cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Check whether pkt is received when CQE is separated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/** + * Check whether pkt is received when CQE is integrated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + #endif /* _HINIC3_RX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
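The receive path in the patch above stops hard-coding the CQE layout and instead calls through nic_dev->rx_ops (nic_rx_cqe_done, nic_rx_get_cqe_info, nic_rx_poll_rq_empty). The ops structure itself is declared in hinic3_nic_io.h rather than in this mail, so the definition below is a stand-in reconstructed from those call sites; it only illustrates how the separate-CQE and integrated-CQE handlers added here would plausibly be wired together, and the real struct, field order, and registration code may differ.

#include <stdbool.h>

struct hinic3_rxq;
struct hinic3_rq_cqe;
struct hinic3_cqe_info;

/* Prototypes mirror the declarations added to hinic3_rx.h in this patch. */
bool hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq,
				 volatile struct hinic3_rq_cqe **rx_cqe);
bool hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq,
				   volatile struct hinic3_rq_cqe **rx_cqe);
void hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq,
			    volatile struct hinic3_rq_cqe *rx_cqe,
			    struct hinic3_cqe_info *cqe_info);
void hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq,
				    volatile struct hinic3_rq_cqe *rx_cqe,
				    struct hinic3_cqe_info *cqe_info);
int hinic3_poll_rq_empty(struct hinic3_rxq *rxq);
int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq);

/* Stand-in for the rx_ops type declared in hinic3_nic_io.h (assumption). */
struct hinic3_rx_ops_stub {
	bool (*nic_rx_cqe_done)(struct hinic3_rxq *rxq,
				volatile struct hinic3_rq_cqe **rx_cqe);
	void (*nic_rx_get_cqe_info)(struct hinic3_rxq *rxq,
				    volatile struct hinic3_rq_cqe *rx_cqe,
				    struct hinic3_cqe_info *cqe_info);
	int (*nic_rx_poll_rq_empty)(struct hinic3_rxq *rxq);
};

/* Legacy devices: the CQE ring is separate from the packet buffers and each
 * entry carries its own done bit, polled with an acquire load.
 */
static const struct hinic3_rx_ops_stub separate_cqe_rx_ops = {
	.nic_rx_cqe_done      = hinic3_rx_separate_cqe_done,
	.nic_rx_get_cqe_info  = hinic3_rx_get_cqe_info,
	.nic_rx_poll_rq_empty = hinic3_poll_rq_empty,
};

/* Compact-CQE devices: the hardware reports progress through the rq_ci
 * write-back area and places the CQE in front of the packet data.
 */
static const struct hinic3_rx_ops_stub integrated_cqe_rx_ops = {
	.nic_rx_cqe_done      = hinic3_rx_integrated_cqe_done,
	.nic_rx_get_cqe_info  = hinic3_rx_get_compact_cqe_info,
	.nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty,
};

This also explains the data_offset handling in hinic3_recv_pkts(): in the integrated layout the CQE occupies the first 8 or 16 bytes of the mbuf, so the patch adds cqe_info.data_offset to rxm->data_off and subtracts it from rx_buf_len, while in the separate layout data_offset stays zero and the behaviour is unchanged.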
* [PATCH v7 6/7] net/hinic3: add tx ops to support Compact CQE 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang ` (4 preceding siblings ...) 2026-03-23 8:04 ` [PATCH v7 5/7] net/hinic3: add rx " Feifei Wang @ 2026-03-23 8:04 ` Feifei Wang 2026-03-23 8:04 ` [PATCH v7 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-23 8:04 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt send path, use different func callback to configure compact wqe and normal wqe offload. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_tx.c | 452 +++++++++++++++++---------------- drivers/net/hinic3/hinic3_tx.h | 146 +++++++++-- 2 files changed, 360 insertions(+), 238 deletions(-) diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index c896fcc76b..e0ff095c04 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -21,6 +21,7 @@ #define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET 1 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0 +#define MAX_TSO_NUM_FRAG 1024 #define HINIC3_TX_OFFLOAD_MASK \ (HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT) @@ -28,7 +29,8 @@ #define HINIC3_TX_CKSUM_OFFLOAD_MASK \ (HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \ HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \ - HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG) + HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_OUTER_UDP_CKSUM | \ + HINIC3_PKT_TX_TCP_SEG) static inline uint16_t hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq) @@ -56,26 +58,23 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq) } static void * -hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) +hinic3_sq_get_wqebbs(struct hinic3_txq *sq, uint16_t num_wqebbs, uint16_t *prod_idx) { - uint16_t cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx); - uint32_t end_pi; + *prod_idx = MASKED_QUEUE_IDX(sq, sq->prod_idx); + sq->prod_idx += num_wqebbs; - end_pi = cur_pi + wqe_info->wqebb_cnt; - sq->prod_idx += wqe_info->wqebb_cnt; + return NIC_WQE_ADDR(sq, *prod_idx); +} - wqe_info->owner = (uint8_t)(sq->owner); - wqe_info->pi = cur_pi; - wqe_info->wrapped = 0; +static inline uint16_t +hinic3_get_and_update_sq_owner(struct hinic3_txq *sq, uint16_t curr_pi, uint16_t wqebb_cnt) +{ + uint16_t owner = sq->owner; - if (unlikely(end_pi >= sq->q_depth)) { + if (unlikely(curr_pi + wqebb_cnt >= sq->q_depth)) sq->owner = !sq->owner; - if (likely(end_pi > sq->q_depth)) - wqe_info->wrapped = (uint8_t)(sq->q_depth - cur_pi); - } - - return NIC_WQE_ADDR(sq, cur_pi); + return owner; } static inline void @@ -90,61 +89,39 @@ hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) /** * Sets the WQE combination information in the transmit queue (SQ). * - * @param[in] txq + * @param[in] sq * Point to send queue. * @param[out] wqe_combo * Point to wqe_combo of send queue(SQ). - * @param[in] wqe - * Point to wqe of send queue(SQ). * @param[in] wqe_info * Point to wqe_info of send queue(SQ). 
*/ static void -hinic3_set_wqe_combo(struct hinic3_txq *txq, +hinic3_set_wqe_combo(struct hinic3_txq *sq, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_sq_wqe *wqe, struct hinic3_wqe_info *wqe_info) { - wqe_combo->hdr = &wqe->compact_wqe.wqe_desc; - - if (wqe_info->offload) { - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->task = (struct hinic3_sq_task *) - (void *)txq->sq_head_addr; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr + txq->wqebb_size); - } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = wqe->extend_wqe.buf_desc; - } + uint16_t tmp_pi; - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->hdr = hinic3_sq_get_wqebbs(sq, 1, &wqe_info->pi); + if (wqe_info->wqebb_cnt == 1) { + /* compact wqe */ + wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_4BYTES; + wqe_combo->task = (struct hinic3_sq_task *)&wqe_combo->hdr->queue_info; + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, 1); return; } - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->bds_head = - (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task); - } + /* extend normal wqe */ + wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->task = hinic3_sq_get_wqebbs(sq, 1, &tmp_pi); + if (wqe_info->sge_cnt > 1) + wqe_combo->bds_head = hinic3_sq_get_wqebbs(sq, wqe_info->sge_cnt - 1, &tmp_pi); - if (wqe_info->wqebb_cnt > 1) { - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS; - - /* This section used as vlan insert, needs to clear. */ - wqe_combo->bds_head->rsvd = 0; - } else { - wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; - } + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, wqe_info->wqebb_cnt); } int @@ -311,6 +288,8 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) /** * Prepare the data packet to be sent and calculate the internal L3 offset. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to be processed. * @param[out] inner_l3_offset @@ -319,14 +298,20 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) * 0 as success, -EINVAL as failure. */ static int -hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) +hinic3_tx_offload_pkt_prepare(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + uint16_t *inner_l3_offset) { uint64_t ol_flags = mbuf->ol_flags; - /* Only support vxlan offload. 
*/ - if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) && - (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN))) - return -EINVAL; + if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) { + if (!(((ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN) && + HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_GENEVE) && + HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_IPIP) && + HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)))) + return -EINVAL; + } #ifdef RTE_LIBRTE_ETHDEV_DEBUG if (rte_validate_tx_offload(mbuf) != 0) @@ -358,107 +343,121 @@ hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) return 0; } -static inline void -hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, uint16_t vlan_tag, - uint8_t vlan_type) +void +hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) +{ + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l4_en, INNER_L4_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l3_en, INNER_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->encapsulation, TUNNEL_FLAG); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l3_en, OUT_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l4_en, OUT_L4_EN); + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); + + if (wqe_combo->task_type == SQ_WQE_TASKSECT_16BYTES) { + task->ip_identify = 0; + task->pkt_info2 = 0; + task->vlan_offload = 0; + task->vlan_offload = SQ_TASK_INFO3_SET(offload_info->vlan_tag, VLAN_TAG) | + SQ_TASK_INFO3_SET(offload_info->vlan_sel, VLAN_TYPE) | + SQ_TASK_INFO3_SET(offload_info->vlan_valid, VLAN_TAG_VALID); + task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + } +} + +void +hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) | - SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) | - SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID); + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + wqe_combo->task->pkt_info0 = + SQ_TASK_INFO_SET(offload_info->out_l3_en, OUT_L3_EN) | + SQ_TASK_INFO_SET(offload_info->out_l4_en, OUT_L4_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l3_en, INNER_L3_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l4_en, INNER_L4_EN) | + SQ_TASK_INFO_SET(offload_info->vlan_valid, VLAN_VALID) | + SQ_TASK_INFO_SET(offload_info->vlan_sel, VLAN_SEL) | + SQ_TASK_INFO_SET(offload_info->vlan_tag, VLAN_TAG); + + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); } -/** - * Set the corresponding offload information based on ol_flags of the mbuf. - * - * @param[in] mbuf - * Point to the mbuf for which offload needs to be set in the sending queue. - * @param[out] task - * Point to task of send queue(SQ). - * @param[out] wqe_info - * Point to wqe_info of send queue(SQ). - * @return - * 0 as success, -EINVAL as failure. 
- */ static int -hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task, - struct hinic3_wqe_info *wqe_info) +hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, + struct rte_mbuf *mbuf, + struct hinic3_sq_wqe_combo *wqe_combo, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t pld_offset = 0; - uint32_t queue_info = 0; - uint16_t vlan_tag; - - task->pkt_info0 = 0; - task->ip_identify = 0; - task->pkt_info2 = 0; - task->vlan_offload = 0; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; /* Vlan offload. */ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) { - vlan_tag = mbuf->vlan_tci; - hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0); - task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + offload_info->vlan_valid = 1; + offload_info->vlan_tag = mbuf->vlan_tci; + offload_info->vlan_sel = HINIC3_TX_TPID0; } - /* Cksum offload. */ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK)) - return 0; + goto set_tx_wqe_offload; /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - pld_offset = wqe_info->payload_offset; - if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET) + wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); - - queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF); - - /* Set MSS value. */ - queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS); + offload_info->inner_l3_en = 1; + offload_info->inner_l4_en = 1; + wqe_info->queue_info.tso = 1; + wqe_info->queue_info.mss = mbuf->tso_segsz; } else { if (ol_flags & HINIC3_PKT_TX_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); + offload_info->inner_l3_en = 1; switch (ol_flags & HINIC3_PKT_TX_L4_MASK) { case HINIC3_PKT_TX_TCP_CKSUM: case HINIC3_PKT_TX_UDP_CKSUM: case HINIC3_PKT_TX_SCTP_CKSUM: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); + offload_info->inner_l4_en = 1; break; - case HINIC3_PKT_TX_L4_NO_CKSUM: break; - default: PMD_DRV_LOG(INFO, "not support pkt type"); return -EINVAL; } } - /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) { case HINIC3_PKT_TX_TUNNEL_VXLAN: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG); + case HINIC3_PKT_TX_TUNNEL_VXLAN_GPE: + case HINIC3_PKT_TX_TUNNEL_GENEVE: + offload_info->encapsulation = 1; + wqe_info->queue_info.udp_dp_en = 1; break; - case 0: break; default: - /* For non UDP/GRE tunneling, drop the tunnel packet. 
*/ PMD_DRV_LOG(INFO, "not support tunnel pkt type"); return -EINVAL; } if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN); + offload_info->out_l3_en = 1; - task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); - task->pkt_info2 = hinic3_hw_be32(task->pkt_info2); - wqe_info->queue_info = queue_info; + if (ol_flags & HINIC3_PKT_TX_OUTER_UDP_CKSUM) + offload_info->out_l4_en = 1; + +set_tx_wqe_offload: + nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -477,7 +476,9 @@ static bool hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) { uint32_t total_len, limit_len, checked_len, left_len, adjust_mss; - uint32_t i, max_sges, left_sges, first_len; + uint32_t max_sges, left_sges, first_len; + uint32_t payload_len, frag_num; + uint32_t i; struct rte_mbuf *mbuf_head, *mbuf_first; struct rte_mbuf *mbuf_pre = mbuf; @@ -485,6 +486,17 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) mbuf_head = mbuf; mbuf_first = mbuf; + /* Calculate the number of message payload frag, + * if it exceeds the hardware limit of 10 bits, + * packet will be discarded. + */ + payload_len = mbuf_head->pkt_len - wqe_info->payload_offset; + frag_num = (payload_len + mbuf_head->tso_segsz - 1) / mbuf_head->tso_segsz; + if (frag_num > MAX_TSO_NUM_FRAG) { + PMD_DRV_LOG(WARNING, "tso frag num over hw limit, frag_num:0x%x.", frag_num); + return false; + } + /* Tso sge number validation. */ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) { checked_len = 0; @@ -544,9 +556,48 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) return true; } +static int +hinic3_non_tso_pkt_pre_process(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +{ + struct rte_mbuf *mbuf_pkt = mbuf; + uint32_t total_len = 0; + uint16_t i; + + if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) + return 0; + + /* Non-tso packet length must less than 64KB. */ + if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) + return -EINVAL; + + /* + * Mbuf number of non-tso packet must less than the sge number + * that nic can support. The excess part will be copied to another + * mbuf. + */ + for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { + total_len += mbuf_pkt->data_len; + mbuf_pkt = mbuf_pkt->next; + } + + /* + * Max copy mbuf size is 4KB, packet will be dropped directly, + * if total copy length is more than it. + */ + if ((total_len + HINIC3_COPY_MBUF_SIZE) < mbuf->pkt_len) + return -EINVAL; + + wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; + wqe_info->cpy_mbuf_cnt = 1; + + return 0; +} + /** * Checks and processes transport offload information for data packets. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to send. * @param[in] wqe_info @@ -555,56 +606,29 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) * 0 as success, -EINVAL as failure. */ static int -hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t i, total_len, inner_l3_offset = 0; + uint16_t inner_l3_offset = 0; int err; - struct rte_mbuf *mbuf_pkt = NULL; wqe_info->sge_cnt = mbuf->nb_segs; + wqe_info->cpy_mbuf_cnt = 0; /* Check if the packet set available offload flags. 
*/ if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) { wqe_info->offload = 0; - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } wqe_info->offload = 1; - err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset); + err = hinic3_tx_offload_pkt_prepare(nic_dev, mbuf, &inner_l3_offset); if (err) return err; - /* Non tso mbuf only check sge num. */ + /* Non-tso mbuf only check sge num. */ if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { - if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) - /* Non tso packet len must less than 64KB. */ - return -EINVAL; - - if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) - /* Valid non-tso mbuf. */ - return 0; - - /* - * The number of non-tso packet fragments must be less than 38, - * and mbuf segs greater than 38 must be copied to other - * buffers. - */ - total_len = 0; - mbuf_pkt = mbuf; - for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { - total_len += mbuf_pkt->data_len; - mbuf_pkt = mbuf_pkt->next; - } - - /* Default support copy total 4k mbuf segs. */ - if ((uint32_t)(total_len + (uint16_t)HINIC3_COPY_MBUF_SIZE) < - mbuf->pkt_len) - return -EINVAL; - - wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; - wqe_info->cpy_mbuf_cnt = 1; - - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } /* Tso mbuf. */ @@ -629,6 +653,7 @@ hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); buf_descs->len = hinic3_hw_be32(len); + buf_descs->rsvd = 0; } static inline struct rte_mbuf * @@ -701,7 +726,6 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, { struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head; - uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt; uint16_t real_segs = mbuf->nb_segs; rte_iova_t dma_addr; @@ -736,11 +760,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, * Parts of wqe is in sq bottom while parts * of wqe is in sq head. 
*/ - if (unlikely(wqe_info->wrapped && - (uint64_t)buf_desc == txq->sq_bot_sge_addr)) - buf_desc = (struct hinic3_sq_bufdesc *) - (void *)txq->sq_head_addr; - + if (unlikely((uint64_t)buf_desc == txq->sq_bot_sge_addr)) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); buf_desc++; } @@ -777,10 +798,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, hinic3_hw_be32(lower_32_bits(dma_addr)); wqe_desc->ctrl_len = mbuf->data_len; } else { - if (unlikely(wqe_info->wrapped && - ((uint64_t)buf_desc == txq->sq_bot_sge_addr))) - buf_desc = (struct hinic3_sq_bufdesc *) - txq->sq_head_addr; + if (unlikely(((uint64_t)buf_desc == txq->sq_bot_sge_addr))) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); } @@ -802,44 +821,44 @@ static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, struct hinic3_wqe_info *wqe_info) { + struct hinic3_queue_info *queue_info = &wqe_info->queue_info; struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; + uint32_t *qsf = &wqe_desc->queue_info; - if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) { - wqe_desc->ctrl_len |= - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | - SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | - SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - /* Compact wqe queue_info will transfer to ucode. */ - wqe_desc->queue_info = 0; - - return; - } - - wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | - SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | + wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DIRECT) | SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - wqe_desc->queue_info = wqe_info->queue_info; - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC); - - if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) { - wqe_desc->queue_info |= - SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); - } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) < - TX_MSS_MIN) { - /* Mss should not less than 80. */ - wqe_desc->queue_info = - SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS); - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { + wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | + SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | + SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + + *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); + + if (!SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS)) { + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); + } else if (SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS) < TX_MSS_MIN) { + /* MSS should not less than 80. 
*/ + *qsf = SQ_CTRL_QUEUE_INFO_CLEAR(*qsf, MSS); + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + } + *qsf = hinic3_hw_be32(*qsf); + } else { + wqe_desc->ctrl_len |= SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->udp_dp_en, UDP_DP_EN) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE); } - wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info); + wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); } /** @@ -861,9 +880,7 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) struct hinic3_tx_info *tx_info = NULL; struct rte_mbuf *mbuf_pkt = NULL; struct hinic3_sq_wqe_combo wqe_combo = {0}; - struct hinic3_sq_wqe *sq_wqe = NULL; struct hinic3_wqe_info wqe_info = {0}; - uint32_t offload_err, free_cnt; uint64_t tx_bytes = 0; uint16_t free_wqebb_cnt, nb_tx; @@ -885,16 +902,28 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Tx loop routine. */ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { mbuf_pkt = *tx_pkts++; - if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) { + if (unlikely(hinic3_get_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_info))) { txq->txq_stats.offload_errors++; break; } - if (!wqe_info.offload) - wqe_info.wqebb_cnt = wqe_info.sge_cnt; - else - /* Use extended sq wqe with normal TS. */ - wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1; + wqe_info.wqebb_cnt = wqe_info.sge_cnt; + if (likely(wqe_info.offload || wqe_info.wqebb_cnt > 1)) { + if (txq->tx_wqe_compact_task) { + /** + * One more wqebb is needed for compact task under two situations: + * 1. TSO: MSS field is needed, no available space for + * compact task in compact wqe. + * 2. SGE number > 1: wqe is handlerd as extended wqe by nic. + */ + if (mbuf_pkt->ol_flags & HINIC3_PKT_TX_TCP_SEG || + wqe_info.wqebb_cnt > 1) + wqe_info.wqebb_cnt++; + } else { + /* Use extended sq wqe with normal TS */ + wqe_info.wqebb_cnt++; + } + } free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq); if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) { @@ -907,28 +936,16 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) } } - /* Get sq wqe address from wqe_page. */ - sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info); - if (unlikely(!sq_wqe)) { - txq->txq_stats.tx_busy++; - break; - } - /* Task or bd section maybe wrapped for one wqe. */ - hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info); + hinic3_set_wqe_combo(txq, &wqe_combo, &wqe_info); - wqe_info.queue_info = 0; /* Fill tx packet offload into qsf and task field. */ - if (wqe_info.offload) { - offload_err = hinic3_set_tx_offload(mbuf_pkt, - wqe_combo.task, - &wqe_info); + offload_err = hinic3_set_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_combo, &wqe_info); if (unlikely(offload_err)) { hinic3_put_sq_wqe(txq, &wqe_info); txq->txq_stats.offload_errors++; break; } - } /* Fill sq_wqe buf_desc and bd_desc. */ err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo, @@ -944,7 +961,12 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) tx_info->mbuf = mbuf_pkt; tx_info->wqebb_cnt = wqe_info.wqebb_cnt; - hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); + /* + * For wqe compact type, no need to prepare + * sq ctrl info. 
+ */ + if (wqe_combo.wqe_type != SQ_WQE_COMPACT_TYPE) + hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); tx_bytes += mbuf_pkt->pkt_len; } diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index 21958a00cc..e0ed9908ad 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -6,30 +6,40 @@ #define _HINIC3_TX_H_ #define MAX_SINGLE_SGE_SIZE 65536 -#define HINIC3_NONTSO_PKT_MAX_SGE 38 /**< non-tso max sge 38. */ +#define HINIC3_NONTSO_PKT_MAX_SGE 32 /**< non-tso max sge 32. */ #define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE) #define HINIC3_TSO_PKT_MAX_SGE 127 /**< tso max sge 127. */ #define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE) -/* Tx offload info. */ -struct hinic3_tx_offload_info { - uint8_t outer_l2_len; - uint8_t outer_l3_type; - uint16_t outer_l3_len; - - uint8_t inner_l2_len; - uint8_t inner_l3_type; - uint16_t inner_l3_len; - - uint8_t tunnel_length; - uint8_t tunnel_type; - uint8_t inner_l4_type; - uint8_t inner_l4_len; +/* Tx wqe queue info */ +struct hinic3_queue_info { + uint8_t pri; + uint8_t uc; + uint8_t sctp; + uint8_t udp_dp_en; + uint8_t tso; + uint8_t ufo; + uint8_t payload_offset; + uint8_t pkt_type; + uint16_t mss; + uint16_t rsvd; +}; - uint16_t payload_offset; - uint8_t inner_l4_tcp_udp; - uint8_t rsvd0; /**< Reserved field. */ +/* Tx wqe offload info */ +struct hinic3_offload_info { + uint8_t encapsulation; + uint8_t esp_next_proto; + uint8_t inner_l4_en; + uint8_t inner_l3_en; + uint8_t out_l4_en; + uint8_t out_l3_en; + uint8_t ipsec_offload; + uint8_t pkt_1588; + uint8_t vlan_sel; + uint8_t vlan_valid; + uint16_t vlan_tag; + uint32_t ip_identify; }; /* Tx wqe ctx. */ @@ -42,14 +52,15 @@ struct hinic3_wqe_info { uint8_t rsvd0; /**< Reserved field 0. */ uint16_t payload_offset; - uint8_t wrapped; + uint8_t rsvd1; /**< Reserved field 1. */ uint8_t owner; uint16_t pi; uint16_t wqebb_cnt; - uint16_t rsvd1; /**< Reserved field 1. */ + uint16_t rsvd2; /**< Reserved field 2. */ - uint32_t queue_info; + struct hinic3_queue_info queue_info; + struct hinic3_offload_info offload_info; }; /* Descriptor for the send queue of wqe. */ @@ -103,8 +114,15 @@ struct hinic3_sq_wqe_combo { uint32_t task_type; }; -enum sq_wqe_data_format { +/* Tx queue ctrl info */ +enum sq_wqe_type { SQ_NORMAL_WQE = 0, + SQ_DIRECT_WQE = 1, +}; + +enum sq_wqe_data_format { + SQ_WQE_SGL = 0, + SQ_WQE_INLINE_DATA = 1, }; /* Indicates the type of a WQE. */ @@ -117,7 +135,7 @@ enum sq_wqe_ec_type { /* Indicates the type of tasks with different lengths. 
*/ enum sq_wqe_tasksect_len_type { - SQ_WQE_TASKSECT_46BITS = 0, + SQ_WQE_TASKSECT_4BYTES = 0, SQ_WQE_TASKSECT_16BYTES = 1, }; @@ -177,6 +195,33 @@ enum sq_wqe_tasksect_len_type { ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT))) +/* Compact queue info */ +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_SHIFT 14 +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_SHIFT 16 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_SHIFT 24 +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_SHIFT 25 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_SHIFT 26 +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_SHIFT 27 + +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_MASK 0x3U +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_MASK 0xFFU +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_MASK 0x1U + +#define SQ_CTRL_COMPACT_QUEUE_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_GET(val, member) \ + (((val) >> SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) & \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_CLEAR(val, member) \ + ((val) & (~(SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT))) + /* Setting and obtaining task information */ #define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19 #define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22 @@ -229,6 +274,37 @@ enum sq_wqe_tasksect_len_type { (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \ SQ_TASK_INFO3_##member##_MASK) +/* compact wqe task field */ +#define SQ_TASK_INFO_PKT_1588_SHIFT 31 +#define SQ_TASK_INFO_IPSEC_PROTO_SHIFT 30 +#define SQ_TASK_INFO_OUT_L3_EN_SHIFT 28 +#define SQ_TASK_INFO_OUT_L4_EN_SHIFT 27 +#define SQ_TASK_INFO_INNER_L3_EN_SHIFT 25 +#define SQ_TASK_INFO_INNER_L4_EN_SHIFT 24 +#define SQ_TASK_INFO_ESP_NEXT_PROTO_SHIFT 22 +#define SQ_TASK_INFO_VLAN_VALID_SHIFT 19 +#define SQ_TASK_INFO_VLAN_SEL_SHIFT 16 +#define SQ_TASK_INFO_VLAN_TAG_SHIFT 0 + +#define SQ_TASK_INFO_PKT_1588_MASK 0x1U +#define SQ_TASK_INFO_IPSEC_PROTO_MASK 0x1U +#define SQ_TASK_INFO_OUT_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_OUT_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_ESP_NEXT_PROTO_MASK 0x3U +#define SQ_TASK_INFO_VLAN_VALID_MASK 0x1U +#define SQ_TASK_INFO_VLAN_SEL_MASK 0x7U +#define SQ_TASK_INFO_VLAN_TAG_MASK 0xFFFFU + +#define SQ_TASK_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_TASK_INFO_##member##_MASK) << \ + SQ_TASK_INFO_##member##_SHIFT) + +#define SQ_TASK_INFO_GET(val, member) \ + (((val) >> SQ_TASK_INFO_##member##_SHIFT) & \ + SQ_TASK_INFO_##member##_MASK) + /* Defines the TX queue status. 
*/ enum hinic3_txq_status { HINIC3_TXQ_STATUS_START = 0, @@ -298,6 +374,8 @@ struct __rte_cache_aligned hinic3_txq { uint64_t sq_head_addr; uint64_t sq_bot_sge_addr; uint32_t cos; + uint8_t tx_wqe_compact_task; + uint8_t rsvd[3]; struct hinic3_txq_stats txq_stats; #ifdef HINIC3_XSTAT_PROF_TX uint64_t prof_tx_end_tsc; @@ -319,4 +397,26 @@ uint16_t hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb int hinic3_stop_sq(struct hinic3_txq *txq); int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev); int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt); + +/** + * Set wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +/** + * Set compact wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); #endif /**< _HINIC3_TX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
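The WQEBB accounting in hinic3_xmit_pkts() above is easy to misread: on hardware with the compact task section, a single-SGE checksum-only packet still fits in one WQEBB because the 4-byte task is folded into the queue_info field, while TSO (which needs the MSS field) or multi-SGE packets fall back to the extended layout and take one extra WQEBB for the 16-byte task section; on legacy hardware any offload costs the extra WQEBB. The helper below reproduces that decision in isolation as a sketch; its parameter names are local to the sketch and it is not part of the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the wqebb_cnt logic from hinic3_xmit_pkts() in the hunk above. */
static uint16_t
sq_wqebb_cnt(bool compact_task_hw, bool offload, bool tso, uint16_t sge_cnt)
{
	uint16_t wqebb_cnt = sge_cnt;

	if (offload || wqebb_cnt > 1) {
		if (compact_task_hw) {
			/* Compact task section has no room for MSS (TSO), and
			 * multi-SGE packets are handled as extended WQEs.
			 */
			if (tso || wqebb_cnt > 1)
				wqebb_cnt++;
		} else {
			/* Legacy task section always needs one extra WQEBB. */
			wqebb_cnt++;
		}
	}

	return wqebb_cnt;
}

int main(void)
{
	printf("compact hw, csum-only, 1 sge: %u wqebb\n", sq_wqebb_cnt(true,  true,  false, 1));
	printf("compact hw, tso,       1 sge: %u wqebb\n", sq_wqebb_cnt(true,  true,  true,  1));
	printf("compact hw, csum,      3 sge: %u wqebb\n", sq_wqebb_cnt(true,  true,  false, 3));
	printf("legacy  hw, csum-only, 1 sge: %u wqebb\n", sq_wqebb_cnt(false, true,  false, 1));
	return 0;
}

The first case is the one the compact WQE type exists for: it is the only one where hinic3_prepare_sq_ctrl() is skipped and the offload bits are packed by hinic3_tx_set_compact_task_offload() into the single descriptor WQEBB.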
* [PATCH v7 7/7] net/hinic3: use different callback func to support htn fdir 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang ` (5 preceding siblings ...) 2026-03-23 8:04 ` [PATCH v7 6/7] net/hinic3: add tx " Feifei Wang @ 2026-03-23 8:04 ` Feifei Wang 2026-03-23 19:50 ` Stephen Hemminger 6 siblings, 1 reply; 80+ messages in thread From: Feifei Wang @ 2026-03-23 8:04 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, the way flow rules created is different from previous SPx NIC, so use different callback func to split them. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 55 +- drivers/net/hinic3/base/hinic3_nic_cfg.h | 19 +- drivers/net/hinic3/hinic3_ethdev.c | 37 +- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_rx.c | 26 +- drivers/net/hinic3/hinic3_tx.c | 16 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 3 +- drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 2 +- 10 files changed, 860 insertions(+), 324 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index 22caac0457..5387626b98 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -970,7 +970,7 @@ hinic3_set_vlan_filter(struct hinic3_hwdev *hwdev, uint32_t vlan_filter_ctrl) static int hinic3_set_rx_lro(struct hinic3_hwdev *hwdev, uint8_t ipv4_en, - uint8_t ipv6_en, uint8_t lro_max_pkt_len) + uint8_t ipv6_en, uint8_t lro_max_pkt_len) { struct hinic3_cmd_lro_config lro_cfg = {0}; uint16_t out_size = sizeof(lro_cfg); @@ -1029,7 +1029,7 @@ hinic3_set_rx_lro_timer(struct hinic3_hwdev *hwdev, uint32_t timer_value) } int -hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len) { uint8_t ipv4_en = 0, ipv6_en = 0; @@ -1468,54 +1468,6 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return 0; } -/** - * Set the Ethernet type filtering rule for the FDIR of a NIC. - * - * @param[in] hwdev - * Pointer to hardware device structure. - * @param[in] pkt_type - * Indicate the packet type. - * @param[in] queue_id - * Indicate the queue id. - * @param[in] en - * Indicate whether to add or delete an operation. 1 - add; 0 - delete. - * - * @return - * 0 on success, non-zero on failure. 
- */ -int -hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en) -{ - struct hinic3_set_fdir_ethertype_rule ethertype_cmd; - uint16_t out_size = sizeof(ethertype_cmd); - int err; - - if (!hwdev) - return -EINVAL; - - memset(ðertype_cmd, 0, - sizeof(struct hinic3_set_fdir_ethertype_rule)); - ethertype_cmd.func_id = hinic3_global_func_id(hwdev); - ethertype_cmd.pkt_type = pkt_type; - ethertype_cmd.pkt_type_en = en; - ethertype_cmd.qid = (uint8_t)queue_id; - - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, - HINIC3_NIC_CMD_SET_FDIR_STATUS, - ðertype_cmd, sizeof(ethertype_cmd), - ðertype_cmd, &out_size); - if (err || ethertype_cmd.head.status || !out_size) { - PMD_DRV_LOG(ERR, - "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", - err, ethertype_cmd.head.status, out_size, - ethertype_cmd.func_id); - return -EIO; - } - - return 0; -} - int hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule, uint8_t tcam_rule_type) @@ -1543,8 +1495,7 @@ hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tc &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd, &out_size); if (err || tcam_cmd.msg_head.status || !out_size) { - PMD_DRV_LOG(ERR, - "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", + PMD_DRV_LOG(ERR, "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", err, tcam_cmd.msg_head.status, out_size); return -EIO; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index 06d5bc7d1b..6d3eb433bd 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -1203,7 +1203,7 @@ int hinic3_set_rx_vlan_offload(struct hinic3_hwdev *hwdev, uint8_t en); * @return * 0 on success, non-zero on failure. */ -int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len); /** @@ -1522,8 +1522,21 @@ int hinic3_get_feature_from_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, */ int hinic3_set_feature_to_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, uint16_t size); -int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en); +/** + * Set the Ethernet type filtering rule for the FDIR of a NIC. + * + * @param[in] hwdev + * Pointer to hardware device structure. + * @param[in] pkt_type + * Indicate the packet type. + * @param[in] ethertype_filter + * Pointer to ethertype_filter structure. + * @param[in] en + * Indicate whether to add or delete an operation. 1 - add; 0 - delete. + * + * @return + * 0 on success, non-zero on failure. 
+ */ int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status); diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 1010773ac1..f4eb788686 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -975,8 +975,8 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, "RX queue depth is out of range from %d to %d", HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH); PMD_DRV_LOG(ERR, - "nb_desc: %d, q_depth: %d, port: %d queue: %d", - nb_desc, rq_depth, dev->data->port_id, qid); + "nb_desc: %d, q_depth: %d, port: %d queue: %d", + nb_desc, rq_depth, dev->data->port_id, qid); return -EINVAL; } @@ -2158,8 +2158,7 @@ hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) } /* Update max frame size. */ - HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = - HINIC3_MTU_TO_PKTLEN(mtu); + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = HINIC3_MTU_TO_PKTLEN(mtu); nic_dev->mtu_size = mtu; return err; } @@ -2357,6 +2356,12 @@ hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev) uint32_t rx_mode; int err; + if (!(nic_dev->feature_cap & NIC_F_PROMISC)) { + PMD_DRV_LOG(ERR, "nic_dev: %s, port_id: %d, do not support vf promisc: %" PRIu64 "", + nic_dev->dev_name, dev->data->port_id, nic_dev->feature_cap); + return -ENOTSUP; + } + rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC; err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode); @@ -2527,20 +2532,22 @@ hinic3_rss_hash_update(struct rte_eth_dev *dev, } rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | - RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) - ? 1 - : 0; + RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | - RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) - ? 1 - : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.tcp_ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); if (err) PMD_DRV_LOG(ERR, "Set RSS type failed"); @@ -2597,11 +2604,13 @@ hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) rss_conf->rss_hf |= rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) : 0; - rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0; - rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0; rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? 
RTE_ETH_RSS_IPV6_TCP_EX : 0; + } return 0; } diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c index 263a281729..37a4f0cf52 100644 --- a/drivers/net/hinic3/hinic3_fdir.c +++ b/drivers/net/hinic3/hinic3_fdir.c @@ -2,15 +2,15 @@ * Copyright(c) 2025 Huawei Technologies Co., Ltd */ +#include "base/hinic3_cmd.h" #include "base/hinic3_compat.h" #include "base/hinic3_hwdev.h" #include "base/hinic3_hwif.h" #include "base/hinic3_nic_cfg.h" #include "hinic3_ethdev.h" +#include "hinic3_nic_io.h" -#define HINIC3_UINT1_MAX 0x1 -#define HINIC3_UINT4_MAX 0xf -#define HINIC3_UINT15_MAX 0x7fff +#define HINIC3_INVALID_INDEX -1 #define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \ (&((struct hinic3_nic_dev *)(nic_dev))->tcam) @@ -77,6 +77,8 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src IPv4. */ tcam_key->key_mask.sipv4_h = @@ -99,15 +101,9 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip); } -static void -hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) +static void hinic3_fdir_ipv6_tcam_key_init_sip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) { - /* Fill type of ip. */ - tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; - tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; - - /* Fill src IPv6. */ tcam_key->key_mask_ipv6.sipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); tcam_key->key_mask_ipv6.sipv6_key1 = @@ -140,8 +136,11 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); tcam_key->key_info_ipv6.sipv6_key7 = HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); +} - /* Fill dst IPv6. */ +static void hinic3_fdir_ipv6_tcam_key_init_dip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ tcam_key->key_mask_ipv6.dipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); tcam_key->key_mask_ipv6.dipv6_key1 = @@ -176,6 +175,26 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); } +static void hinic3_fdir_ipv6_tcam_key_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + hinic3_fdir_ipv6_tcam_key_init_sip(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init_dip(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + /* Fill type of ip. */ + tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); +} + /** * Set the TCAM information in notunnel scenario. 
* @@ -204,6 +223,10 @@ hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev, tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + + tcam_key->key_mask.vlan_flag = 1; + tcam_key->key_info.vlan_flag = 0; + tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -223,6 +246,8 @@ hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src ipv4. */ tcam_key->key_mask.sipv4_h = @@ -252,6 +277,8 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_vxlan_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_vxlan_ipv6.vlan_flag = 0; /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. */ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 = @@ -288,77 +315,6 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]); } -static void -hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) -{ - tcam_key->key_mask_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_mask_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - - tcam_key->key_mask_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key5 = - 
HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_mask_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); -} - static void hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, struct hinic3_fdir_filter *rule, @@ -370,11 +326,14 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto; tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info_ipv6.tunnel_type = rule->tunnel_type; tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; tcam_key->key_info_ipv6.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -386,7 +345,7 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.sport = rule->key_spec.src_port; if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY) - hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); } /** @@ -448,9 +407,11 @@ hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev, HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info.tunnel_type = rule->tunnel_type; + tcam_key->key_mask.vlan_flag = 1; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + tcam_key->key_info.vlan_flag = 0; tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -479,6 +440,259 @@ hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev, tcam_key_calculate(tcam_key, fdir_tcam_rule); } +static void +hinic3_fdir_tcam_key_set_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.dipv4_l = + 
HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_sip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_mask_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_dip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_mask_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + 
tcam_key->key_mask_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void hinic3_fdir_tcam_key_set_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_sip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_notunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.outer_ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6_htn.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.inner_ipv4, + &rule->key_spec.inner_ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_vxlan_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_vxlan_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.inner_ipv6, + &rule->key_spec.inner_ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_tunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = rule->tunnel_type; + + tcam_key->key_mask_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_mask_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_h = + 
HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); + + if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_outer_ipv4_info(rule, tcam_key); + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_inner_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_inner_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_info_htn_init(struct rte_eth_dev *dev, + struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key, + struct hinic3_tcam_cfg_rule *fdir_tcam_rule) +{ + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + + tcam_key->key_mask_htn.function_id_h = HINIC3_UINT5_MAX; + tcam_key->key_mask_htn.function_id_l = HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_l = + hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_h = + (hinic3_global_func_id(nic_dev->hwdev) >> HINIC3_UINT5_WIDTH) & HINIC3_UINT5_MAX; + + tcam_key->key_mask_htn.ip_proto = rule->key_mask.proto; + tcam_key->key_info_htn.ip_proto = rule->key_spec.proto; + + tcam_key->key_mask_htn.sport = rule->key_mask.src_port; + tcam_key->key_info_htn.sport = rule->key_spec.src_port; + + tcam_key->key_mask_htn.dport = rule->key_mask.dst_port; + tcam_key->key_info_htn.dport = rule->key_spec.dst_port; + if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL) + hinic3_fdir_tcam_notunnel_htn_init(rule, tcam_key); + else + hinic3_fdir_tcam_tunnel_htn_init(rule, tcam_key); + + fdir_tcam_rule->data.qid = rule->rq_index; + + tcam_key_calculate(tcam_key, fdir_tcam_rule); +} + /** * Find filter in given ethertype filter list. * @@ -513,19 +727,30 @@ hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_li * Point to the tcam filter list. * @param[in] key * The tcam key to find. + * @param[in] action_type + * The type of action. + * @param[in] tcam_index + * The index of tcam. * @return * If a matching filter is found, the filter is returned, otherwise NULL. */ static inline struct hinic3_tcam_filter * hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list, - struct hinic3_tcam_key *key) + struct hinic3_tcam_key *key, + uint8_t action_type, uint16_t tcam_index) { struct hinic3_tcam_filter *it; - TAILQ_FOREACH(it, filter_list, entries) { - if (memcmp(key, &it->tcam_key, - sizeof(struct hinic3_tcam_key)) == 0) { - return it; + if (action_type == HINIC3_ACTION_ADD) { + TAILQ_FOREACH(it, filter_list, entries) { + if (memcmp(key, &it->tcam_key, sizeof(struct hinic3_tcam_key)) == 0) + return it; + } + } else { + TAILQ_FOREACH(it, filter_list, entries) { + if (tcam_index == + (it->index + HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(it->dynamic_block_id))) + return it; } } @@ -588,25 +813,18 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, * * @param[in] dev * Pointer to ethernet device structure. - * @param[in] fdir_tcam_rule - * Indicate the filtering rule to be searched for. * @param[in] tcam_info * Ternary Content-Addressable Memory (TCAM) information. - * @param[in] tcam_filter - * Point to the TCAM filter. * @param[out] tcam_index * Indicate the TCAM index to be searched for. * @result * Pointer to the TCAM dynamic block. If the search fails, NULL is returned. 
*/ static struct hinic3_tcam_dynamic_block * -hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, - struct hinic3_tcam_cfg_rule *fdir_tcam_rule, +hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev, struct hinic3_tcam_info *tcam_info, - struct hinic3_tcam_filter *tcam_filter, uint16_t *tcam_index) { - struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); uint16_t block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt; struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; struct hinic3_tcam_dynamic_block *tmp = NULL; @@ -616,6 +834,8 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, uint16_t index; int err; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + rule_nums += nic_dev->ethertype_rule_nums; /* * Check whether the number of filtering rules reaches the maximum * capacity of dynamic TCAM blocks. @@ -662,8 +882,7 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, if (tmp == NULL || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "Fdir filter dynamic lookup for index failed!"); + PMD_DRV_LOG(ERR, "Fdir filter dynamic lookup for index failed!"); goto look_up_failed; } @@ -674,20 +893,13 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, /* Find the first free position. */ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "tcam block 0x%x supports filter rules is full!", + PMD_DRV_LOG(ERR, "tcam block 0x%x supports filter rules is full!", tmp->dynamic_block_id); goto look_up_failed; } - tcam_filter->dynamic_block_id = tmp->dynamic_block_id; - tcam_filter->index = index; *tcam_index = index; - fdir_tcam_rule->index = - HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) + - index; - return tmp; look_up_failed: @@ -702,6 +914,107 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, return NULL; } +static void +hinic3_tcam_index_free(struct hinic3_nic_dev *nic_dev, uint16_t index, uint16_t block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + + TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list, entries) { + if (tmp->dynamic_block_id == block_id) + break; + } + + if (tmp == NULL || tmp->dynamic_block_id != block_id) { + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); + return; + } + + tmp->dynamic_index[index] = 0; + tmp->dynamic_index_cnt--; + if (tmp->dynamic_index_cnt == 0) { + hinic3_free_tcam_block(nic_dev->hwdev, &block_id); + hinic3_free_dynamic_block_resource(tcam_info, tmp); + } +} + +static uint16_t +hinic3_tcam_alloc_index(void *dev, uint16_t *block_id) +{ + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev; + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + uint16_t index = 0; + + tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev, tcam_info, &index); + if (tmp == NULL) { + PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); + return HINIC3_TCAM_INVALID_INDEX; + } + + tmp->dynamic_index[index] = 1; + tmp->dynamic_index_cnt++; + + *block_id = tmp->dynamic_block_id; + + return index; +} + +static int +hinic3_set_fdir_ethertype_filter(void *hwdev, uint8_t pkt_type, void *filter, uint8_t en) +{ + struct hinic3_set_fdir_ethertype_rule ethertype_cmd; + struct hinic3_ethertype_filter *ethertype_filter = (struct hinic3_ethertype_filter *)filter; + uint16_t out_size = sizeof(ethertype_cmd); + uint16_t 
block_id; + uint32_t index = 0; + int err; + + if (!hwdev) + return -EINVAL; + struct hinic3_nic_dev *nic_dev = + (struct hinic3_nic_dev *)((struct hinic3_hwdev *)hwdev)->dev_handle; + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en != 0) { + index = hinic3_tcam_alloc_index(nic_dev, &block_id); + if (index == HINIC3_TCAM_INVALID_INDEX) + return -ENOMEM; + index += HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id); + } else { + index = ethertype_filter->tcam_index[pkt_type]; + } + } + + memset(ðertype_cmd, 0, sizeof(struct hinic3_set_fdir_ethertype_rule)); + ethertype_cmd.func_id = hinic3_global_func_id(hwdev); + ethertype_cmd.pkt_type = pkt_type; + ethertype_cmd.pkt_type_en = en; + ethertype_cmd.index = index; + ethertype_cmd.qid = (uint8_t)ethertype_filter->queue; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, + HINIC3_NIC_CMD_SET_FDIR_STATUS, + ðertype_cmd, sizeof(ethertype_cmd), + ðertype_cmd, &out_size); + if (err || ethertype_cmd.head.status || !out_size) { + PMD_DRV_LOG(ERR, + "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", + err, ethertype_cmd.head.status, out_size, ethertype_cmd.func_id); + return -EIO; + } + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en == 0) { + hinic3_tcam_index_free(nic_dev, HINIC3_TCAM_GET_INDEX_IN_BLOCK(index), + HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index)); + } else { + ethertype_filter->tcam_index[pkt_type] = index; + } + } + + return 0; +} + /** * Add a TCAM filter. * @@ -722,11 +1035,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); - struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; - struct hinic3_tcam_dynamic_block *tmp = NULL; struct hinic3_tcam_filter *tcam_filter; - uint16_t tcam_block_index = 0; - uint16_t index = 0; int err; /* Alloc TCAM filter memory. */ @@ -737,39 +1046,14 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, tcam_filter->tcam_key = *tcam_key; tcam_filter->queue = (uint16_t)(fdir_tcam_rule->data.qid); - - /* Add new TCAM rules. */ - if (nic_dev->tcam_rule_nums == 0) { - err = hinic3_alloc_tcam_block(nic_dev->hwdev, &tcam_block_index); - if (err) { - PMD_DRV_LOG(ERR, - "Fdir filter tcam alloc block failed!"); - goto failed; - } - - dynamic_block_ptr = - hinic3_alloc_dynamic_block_resource(tcam_info, - tcam_block_index); - if (dynamic_block_ptr == NULL) { - PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first block memory failed!"); - goto alloc_block_failed; - } - } - - /* - * Look for an available index in the dynamic block to store the new - * TCAM filter. - */ - tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info, - tcam_filter, &index); - if (tmp == NULL) { - PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); - goto lookup_tcam_index_failed; - } + tcam_filter->index = hinic3_tcam_alloc_index(nic_dev, &tcam_filter->dynamic_block_id); + if (tcam_filter->index == HINIC3_TCAM_INVALID_INDEX) + goto failed; + fdir_tcam_rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tcam_filter->dynamic_block_id) + + tcam_filter->index; /* Add a new TCAM rule to the network device. 
*/ - err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, - TCAM_RULE_FDIR_TYPE); + err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, TCAM_RULE_FDIR_TYPE); if (err) { PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!"); goto add_tcam_rules_failed; @@ -785,10 +1069,6 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, /* Add a filter to the end of the queue. */ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries); - /* Update dynamic index. */ - tmp->dynamic_index[index] = 1; - tmp->dynamic_index_cnt++; - nic_dev->tcam_rule_nums++; PMD_DRV_LOG(INFO, @@ -796,7 +1076,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, hinic3_global_func_id(nic_dev->hwdev)); PMD_DRV_LOG(INFO, "tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d", - tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index, + tcam_filter->dynamic_block_id, tcam_filter->index, fdir_tcam_rule->index, fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums); return 0; @@ -806,14 +1086,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, TCAM_RULE_FDIR_TYPE); add_tcam_rules_failed: -lookup_tcam_index_failed: - if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL) - hinic3_free_dynamic_block_resource(tcam_info, - dynamic_block_ptr); - -alloc_block_failed: - if (nic_dev->tcam_rule_nums == 0) - hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tcam_filter->dynamic_block_id); failed: rte_free(tcam_filter); @@ -850,8 +1123,7 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, } if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) { - PMD_DRV_LOG(ERR, - "Fdir filter del dynamic lookup for block failed!"); + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); return -EINVAL; } /* Calculate TCAM index. */ @@ -873,14 +1145,9 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, dynamic_block_id, tcam_filter->index, index, tmp->dynamic_index_cnt - 1, nic_dev->tcam_rule_nums - 1); - tmp->dynamic_index[tcam_filter->index] = 0; - tmp->dynamic_index_cnt--; - nic_dev->tcam_rule_nums--; - if (tmp->dynamic_index_cnt == 0) { - hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tmp->dynamic_block_id); - hinic3_free_dynamic_block_resource(tcam_info, tmp); - } + nic_dev->tcam_rule_nums--; /* If the number of rules is 0, the TCAM filter is disabled. */ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) @@ -930,6 +1197,7 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_tcam_filter *tcam_filter; @@ -940,11 +1208,15 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule)); memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key)); - hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, - &fdir_tcam_rule); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + else + hinic3_fdir_tcam_info_htn_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + /* Search for a filter. 
*/ tcam_filter = - hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key); + hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_ADD, HINIC3_INVALID_INDEX); if (tcam_filter != NULL && add) { PMD_DRV_LOG(ERR, "Filter exists."); return -EEXIST; @@ -965,6 +1237,13 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, fdir_filter->tcam_index = (int)(fdir_tcam_rule.index); } else { + tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_NOT_ADD, + fdir_filter->tcam_index); + if (tcam_filter == NULL) { + PMD_DRV_LOG(ERR, "Filter doesn't exist."); + return -ENOENT; + } PMD_DRV_LOG(INFO, "begin to del tcam filter"); ret = hinic3_del_tcam_filter(dev, tcam_filter); if (ret) @@ -1088,7 +1367,7 @@ hinic3_free_fdir_filter(struct rte_eth_dev *dev) static int hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1097,7 +1376,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1107,7 +1386,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Request Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1117,7 +1396,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Response Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1129,19 +1408,19 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, set_arp_rep_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, !add); + ethertype_filter, !add); set_arp_req_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1150,7 +1429,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the LACP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1160,7 +1439,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the OAM Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_OAM, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d", add ? 
"Add" : "Del", ret); @@ -1172,14 +1451,14 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, set_arp_oam_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1188,7 +1467,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the LLDP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1198,7 +1477,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the CDCP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_CDCP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1210,14 +1489,14 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, set_arp_cdcp_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1245,7 +1524,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, return hinic3_flow_set_arp_filter(dev, ethertype_filter, add); case RTE_ETHER_TYPE_RARP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_RARP, ethertype_filter, add); case RTE_ETHER_TYPE_SLOW: return hinic3_flow_set_slow_filter(dev, ethertype_filter, add); @@ -1255,11 +1534,11 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, case RTE_ETHER_TYPE_CNM: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_CNM, ethertype_filter, add); case RTE_ETHER_TYPE_ECP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_ECP, ethertype_filter, add); default: PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d", @@ -1270,7 +1549,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, } static int -hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter) +hinic3_flow_ethertype_rule_nums(struct hinic3_ethertype_filter *ethertype_filter) { switch (ethertype_filter->ether_type) { case RTE_ETHER_TYPE_ARP: @@ -1309,7 +1588,7 @@ hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filte */ int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { /* Get dev private info. 
*/ diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h index 8659f588d9..277d89d4fd 100644 --- a/drivers/net/hinic3/hinic3_fdir.h +++ b/drivers/net/hinic3/hinic3_fdir.h @@ -14,6 +14,30 @@ #define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index)) +#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \ + ((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \ + ((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_INVALID_INDEX 0xFFFF + +enum hinic3_ether_type { + HINIC3_PKT_TYPE_ARP = 1, + HINIC3_PKT_TYPE_ARP_REQ, + HINIC3_PKT_TYPE_ARP_REP, + HINIC3_PKT_TYPE_RARP, + HINIC3_PKT_TYPE_LACP, + HINIC3_PKT_TYPE_LLDP, + HINIC3_PKT_TYPE_OAM, + HINIC3_PKT_TYPE_CDCP, + HINIC3_PKT_TYPE_CNM, + HINIC3_PKT_TYPE_ECP = 10, + HINIC3_PKT_TYPE_BUTT, + + HINIC3_PKT_UNKNOWN = 31, +}; + /* Indicate a traffic filtering rule. */ struct rte_flow { TAILQ_ENTRY(rte_flow) node; @@ -30,6 +54,8 @@ struct hinic3_fdir_rule_key { uint16_t src_port; uint16_t dst_port; uint8_t proto; + uint8_t vlan_flag; + uint16_t ether_type; }; struct hinic3_fdir_filter { @@ -42,17 +68,34 @@ struct hinic3_fdir_filter { uint32_t rq_index; /**< Queue assigned when matched. */ }; +struct hinic3_ethertype_filter { + int tcam_index[HINIC3_PKT_TYPE_BUTT]; + uint16_t ether_type; /**< Ether type to match */ + uint16_t queue; /**< Queue assigned to when match*/ +}; + /* This structure is used to describe a basic filter type. */ struct hinic3_filter_t { uint16_t filter_rule_nums; enum rte_filter_type filter_type; - struct rte_eth_ethertype_filter ethertype_filter; + struct hinic3_ethertype_filter ethertype_filter; struct hinic3_fdir_filter fdir_filter; }; +enum hinic3_action_type { + HINIC3_ACTION_ADD, + HINIC3_ACTION_NOT_ADD, +}; + enum hinic3_fdir_tunnel_mode { HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0, - HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_NVGRE = 2, + HINIC3_FDIR_TUNNEL_MODE_FC = 3, + HINIC3_FDIR_TUNNEL_MODE_GPE = 4, + HINIC3_FDIR_TUNNEL_MODE_GENEVE = 5, + HINIC3_FDIR_TUNNEL_MODE_NSH = 6, + HINIC3_FDIR_TUNNEL_MODE_IPIP = 7, }; enum hinic3_fdir_ip_type { @@ -61,7 +104,6 @@ enum hinic3_fdir_ip_type { HINIC3_FDIR_IP_TYPE_ANY = 2, }; -/* Describe the key structure of the TCAM. */ struct hinic3_tcam_key_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -77,11 +119,13 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t dipv4_l : 16; - uint32_t rsvd2 : 16; + + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t dport : 16; uint32_t sport : 16; @@ -89,9 +133,10 @@ struct hinic3_tcam_key_mem { uint32_t rsvd6 : 16; uint32_t outer_sipv4_h : 16; - uint32_t outer_sipv4_l : 16; + uint32_t outer_sipv4_l : 16; uint32_t outer_dipv4_h : 16; + uint32_t outer_dipv4_l : 16; uint32_t vni_h : 16; @@ -110,13 +155,14 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t sipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t dipv4_l : 16; uint32_t rsvd3; uint32_t dport : 16; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t rsvd5 : 16; uint32_t sport : 16; @@ -135,18 +181,90 @@ struct hinic3_tcam_key_mem { #endif }; -/* - * Define the IPv6-related TCAM key data structure in common - * scenarios or IPv6 tunnel scenarios. 
- */ +struct hinic3_tcam_key_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h: 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t sipv4_h : 16; + + uint32_t sipv4_l : 16; + uint32_t rsvd5 : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t rsvd8 : 16; + uint32_t dipv4_h : 16; + + uint32_t dipv4_l : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd5 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t sipv4_h : 16; + uint32_t vni_l : 16; + + uint32_t rsvd5 : 16; + uint32_t sipv4_l : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t dipv4_h : 16; + uint32_t rsvd8 : 16; + + uint32_t sport : 16; + uint32_t dipv4_l :16; + + uint32_t rsvd9 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; - /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */ uint32_t ip_proto : 8; uint32_t tunnel_type : 4; uint32_t outer_ip_type : 1; - uint32_t rsvd1 : 3; + uint32_t vlan_flag : 1; + uint32_t rsvd1 : 2; uint32_t function_id : 15; uint32_t ip_type : 1; @@ -179,7 +297,9 @@ struct hinic3_tcam_key_ipv6_mem { uint32_t dipv6_key7 : 16; uint32_t rsvd2 : 16; #else - uint32_t rsvd1 : 3; + uint32_t rsvd1 : 2; + uint32_t vlan_flag : 1; + uint32_t outer_ip_type : 1; uint32_t tunnel_type : 4; uint32_t ip_proto : 8; @@ -218,10 +338,86 @@ struct hinic3_tcam_key_ipv6_mem { #endif }; -/* - * Define the tcam key value data structure related to IPv6 in - * the VXLAN scenario. 
- */ +struct hinic3_tcam_key_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t sipv6_key0 : 16; + + uint32_t sipv6_key1 : 16; + uint32_t sipv6_key2 : 16; + + uint32_t sipv6_key3 : 16; + uint32_t sipv6_key4 : 16; + + uint32_t sipv6_key5 : 16; + uint32_t sipv6_key6 : 16; + + uint32_t sipv6_key7 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t sipv6_key0 : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t sipv6_key2 : 16; + uint32_t sipv6_key1 : 16; + + uint32_t sipv6_key4 : 16; + uint32_t sipv6_key3 : 16; + + uint32_t sipv6_key6 : 16; + uint32_t sipv6_key5 : 16; + + uint32_t dipv6_key0 : 16; + uint32_t sipv6_key7 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd2 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_vxlan_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -246,7 +442,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t sport : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3 : 16; uint32_t outer_sipv4_h : 16; @@ -281,7 +478,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t dipv6_key7 : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t sport : 16; uint32_t outer_sipv4_h : 16; @@ -298,6 +496,88 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { #endif }; +struct hinic3_tcam_key_vxlan_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t rsvd3 : 16; + + uint32_t rsvd4 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t rsvd3 : 16; + uint32_t vni_l : 16; + + uint32_t dipv6_key0 : 16; + 
uint32_t rsvd4 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd5 : 16; + uint32_t dport : 16; +#endif +}; + /* * TCAM key structure. The two unions indicate the key and mask respectively. * The TCAM key is consistent with the TCAM entry. @@ -307,18 +587,26 @@ struct hinic3_tcam_key { struct hinic3_tcam_key_mem key_info; struct hinic3_tcam_key_ipv6_mem key_info_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_info_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_info_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_info_vxlan_ipv6_htn; }; union { struct hinic3_tcam_key_mem key_mask; struct hinic3_tcam_key_ipv6_mem key_mask_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_mask_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_mask_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_mask_vxlan_ipv6_htn; }; }; /* Structure indicates the TCAM filter. */ struct hinic3_tcam_filter { - TAILQ_ENTRY(hinic3_tcam_filter) - entries; /**< Filter entry, used for linked list operations. */ + /** Filter entry, used for linked list operations. */ + TAILQ_ENTRY(hinic3_tcam_filter) entries; uint16_t dynamic_block_id; /**< Dynamic block ID. */ uint16_t index; /**< TCAM index. */ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */ @@ -362,33 +650,24 @@ struct hinic3_tcam_info { #define HINIC3_CNM_RULE_NUM 1 #define HINIC3_ECP_RULE_NUM 2 +#define HINIC3_UINT1_MAX 0x1 +#define HINIC3_UINT2_MAX 0x3 +#define HINIC3_UINT3_MAX 0x7 +#define HINIC3_UINT4_MAX 0xf +#define HINIC3_UINT5_WIDTH 0x5 +#define HINIC3_UINT5_MAX 0x1f +#define HINIC3_UINT15_MAX 0x7fff + /* Define Ethernet type. */ #define RTE_ETHER_TYPE_CNM 0x22e7 #define RTE_ETHER_TYPE_ECP 0x8940 -/* Protocol type of the data packet. */ -enum hinic3_ether_type { - HINIC3_PKT_TYPE_ARP = 1, - HINIC3_PKT_TYPE_ARP_REQ, - HINIC3_PKT_TYPE_ARP_REP, - HINIC3_PKT_TYPE_RARP, - HINIC3_PKT_TYPE_LACP, - HINIC3_PKT_TYPE_LLDP, - HINIC3_PKT_TYPE_OAM, - HINIC3_PKT_TYPE_CDCP, - HINIC3_PKT_TYPE_CNM, - HINIC3_PKT_TYPE_ECP = 10, - - HINIC3_PKT_UNKNOWN = 31, -}; - int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add); int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add); - void hinic3_free_fdir_filter(struct rte_eth_dev *dev); int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, uint32_t queue_id, uint32_t able); diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 9e2c80f759..4c12943a05 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -22,8 +22,7 @@ * Current pi. */ static inline void -hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, - uint16_t *pi) +hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, uint16_t *pi) { *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx); @@ -84,8 +83,7 @@ hinic3_rx_fill_wqe(struct hinic3_rxq *rxq) if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { /* Unit of cqe length is 16B. 
*/ - hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, - cqe_dma, + hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, cqe_dma, HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT); /* Use fixed len. */ rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; @@ -436,12 +434,18 @@ hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev, rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); return err; } @@ -488,8 +492,7 @@ hinic3_update_rss_config(struct rte_eth_dev *dev, goto init_rss_fail; } - err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, - prio_tc); + err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, prio_tc); if (err) { PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err); goto init_rss_fail; @@ -797,7 +800,7 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) } } - hinic3_rearm_rxq_mbuf(rxq); + (void)hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); if (err) @@ -813,7 +816,6 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return err; } - static inline uint64_t hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { @@ -1018,8 +1020,8 @@ hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hini uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); - cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); - cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index e0ff095c04..6b2bffb14e 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -393,7 +393,7 @@ static int hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_wqe_info *wqe_info) + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; struct hinic3_offload_info *offload_info = &wqe_info->offload_info; @@ -409,7 +409,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, /* Tso offload. 
*/ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + wqe_info->queue_info.payload_offset = wqe_info->payload_offset >> 1; if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; @@ -457,7 +457,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, offload_info->out_l4_en = 1; set_tx_wqe_offload: - nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); + nic_dev->tx_ops->nic_tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -627,9 +627,8 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, return err; /* Non-tso mbuf only check sge num. */ - if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { + if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); - } /* Tso mbuf. */ wqe_info->payload_offset = @@ -647,8 +646,7 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, } static inline void -hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, - uint32_t len) +hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, uint32_t len) { buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); @@ -832,14 +830,14 @@ hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT); *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | - SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset, PLDOFF) | SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 3dbbd53174..73f4922734 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -55,7 +55,8 @@ struct hinic3_htn_vlan_ctx { /** * Get cmdq ops hardware tile NIC(htn) supported. * - * @retval Pointer to ops. + * @return + * Pointer to ops. 
*/ struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build index b79b753716..b286cdb79c 100644 --- a/drivers/net/hinic3/meson.build +++ b/drivers/net/hinic3/meson.build @@ -16,8 +16,6 @@ endif cflags += ['-DHW_CONVERT_ENDIAN'] -subdir('base') - sources = files( 'hinic3_ethdev.c', 'hinic3_nic_io.c', @@ -28,3 +26,9 @@ sources = files( ) includes += include_directories('base') +includes += include_directories('stn_adapt') +includes += include_directories('stn_adapt') + +subdir('base') +subdir('htn_adapt') +subdir('stn_adapt') \ No newline at end of file diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index dfe8598f78..f41f060d17 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -94,7 +94,7 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } -static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev __rte_unused, const uint32_t *indir_table, struct hinic3_cmd_buf *cmd_buf) { -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
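The hunks above are part of a broader split in this series: hardware-specific behaviour is routed through per-variant ops tables (for example nic_dev->tx_ops->nic_tx_set_wqe_offload() and the cmdq ops returned by hinic3_nic_cmdq_get_htn_ops()), so the same call sites serve both the legacy and the new SPx NICs. A minimal, self-contained sketch of that dispatch pattern follows; all names in it are illustrative and are not the driver's real structures.

/* Sketch of ops-table dispatch; every name here is illustrative only. */
#include <stdio.h>

struct nic_cmdq_ops {
	const char *name;
	int (*set_rss_indir)(const unsigned int *indir, unsigned int n);
};

static int stn_set_rss_indir(const unsigned int *indir, unsigned int n)
{
	(void)indir;
	printf("stn: legacy cmdq format, %u entries\n", n);
	return 0;
}

static int htn_set_rss_indir(const unsigned int *indir, unsigned int n)
{
	(void)indir;
	printf("htn: enhanced cmdq format, %u entries\n", n);
	return 0;
}

static const struct nic_cmdq_ops stn_ops = { "stn", stn_set_rss_indir };
static const struct nic_cmdq_ops htn_ops = { "htn", htn_set_rss_indir };

int main(void)
{
	int dev_has_htn_cmdq = 1;	/* would come from negotiated feature bits */
	const struct nic_cmdq_ops *ops = dev_has_htn_cmdq ? &htn_ops : &stn_ops;
	unsigned int indir[4] = { 0, 1, 2, 3 };

	/* One call site serves both hardware families. */
	return ops->set_rss_indir(indir, 4);
}

The ops pointer is chosen once at device init time, so the per-command path needs no per-call branching on the NIC family.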
* Re: [PATCH v7 7/7] net/hinic3: use different callback func to support htn fdir 2026-03-23 8:04 ` [PATCH v7 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang @ 2026-03-23 19:50 ` Stephen Hemminger 2026-03-24 1:19 ` 回复: " wangfeifei (J) 0 siblings, 1 reply; 80+ messages in thread From: Stephen Hemminger @ 2026-03-23 19:50 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, Feifei Wang On Mon, 23 Mar 2026 16:04:50 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > > + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { > + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; > + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; > + } else { > + rss_type.ipv6_ext = 0; > + rss_type.ipv6_ext = 0; > + } > + Overall AI review is good, but: One issue remains from V6: Patch 7/7: net/hinic3: use different callback func to support htn fdir The copy-paste bug in hinic3_init_rss_type() is still present. The hinic3_rss_hash_update() instance was fixed but hinic3_init_rss_type() still has: rss_type.ipv6_ext = 0; rss_type.ipv6_ext = 0; The second line should be rss_type.tcp_ipv6_ext = 0. This was reported in both the V2 and V6 reviews. ^ permalink raw reply [flat|nested] 80+ messages in thread
* Re: [PATCH v7 7/7] net/hinic3: use different callback func to support htn fdir
  2026-03-23 19:50       ` Stephen Hemminger
@ 2026-03-24  1:19         ` wangfeifei (J)
  0 siblings, 0 replies; 80+ messages in thread
From: wangfeifei (J) @ 2026-03-24  1:19 UTC (permalink / raw)
  To: Stephen Hemminger, Feifei Wang
  Cc: dev@dpdk.org, zengweiliang zengweiliang, chenyi (CY)

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: March 24, 2026 3:51
> To: Feifei Wang <wff_light@vip.163.com>
> Cc: dev@dpdk.org; wangfeifei (J) <wangfeifei40@huawei.com>
> Subject: Re: [PATCH v7 7/7] net/hinic3: use different callback func to support htn
> fdir
>
> On Mon, 23 Mar 2026 16:04:50 +0800
> Feifei Wang <wff_light@vip.163.com> wrote:
>
> >
> > +	if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) {
> > +		rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
> > +		rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
> > +	} else {
> > +		rss_type.ipv6_ext = 0;
> > +		rss_type.ipv6_ext = 0;
> > +	}
> > +
>
> Overall AI review is good, but:
>
> One issue remains from V6:
> Patch 7/7: net/hinic3: use different callback func to support htn fdir The
> copy-paste bug in hinic3_init_rss_type() is still present.
> The hinic3_rss_hash_update() instance was fixed but
> hinic3_init_rss_type() still has:
>
> rss_type.ipv6_ext = 0;
> rss_type.ipv6_ext = 0;
>
> The second line should be rss_type.tcp_ipv6_ext = 0.
> This was reported in both the V2 and V6 reviews.

[Feifei] Sorry for missing this; we will fix it in the next version.

^ permalink raw reply	[flat|nested] 80+ messages in thread
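For reference, the fix requested above amounts to clearing both extension-header hash bits in the fallback branch; the corrected hunk in hinic3_init_rss_type() would read:

	if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) {
		rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
		rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
	} else {
		/* extension-header hashing is only enabled with the HTN cmdq feature */
		rss_type.ipv6_ext = 0;
		rss_type.tcp_ipv6_ext = 0;
	}

This is exactly the one-line change the review asks for and is what lands in v8 below.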
* [PATCH v8 0/7] hinic3 change for support new SPx NIC 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (9 preceding siblings ...) 2026-03-23 8:04 ` [PATCH v7 " Feifei Wang @ 2026-03-24 1:55 ` Feifei Wang 2026-03-24 1:55 ` [PATCH v8 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang ` (10 more replies) 10 siblings, 11 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-24 1:55 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Change hinic3 driver to support Huawei new SPx series NIC. v2: --fix build issues v3: --fix community review comments and err reports v4: --fix rss type assignment error v5: --fix community ubuntu-22.04-clang err v6: --fix atomic compilation error v6: --fix community review comments v7: --fix htn/stn ops function name error --update doc/guides for hinic3 driver v8: --fix guides grammar issue --fix rss_type.ipv6_ext = 0 error Feifei Wang (7): net/hinic3: add support for new SPx series NIC net/hinic3: add enhance cmdq support for new SPx series NIC net/hinic3: use different callback func to split new/old cmdq operations net/hinic3: add fun init ops to support Compact CQE net/hinic3: add rx ops to support Compact CQE net/hinic3: add tx ops to support Compact CQE net/hinic3: use different callback func to support htn fdir doc/guides/nics/features/hinic3.ini | 11 +- doc/guides/nics/hinic3.rst | 5 +- doc/guides/rel_notes/release_26_03.rst | 8 + drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ drivers/net/hinic3/base/hinic3_csr.h | 18 +- drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_hwif.c | 10 +- drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- drivers/net/hinic3/base/meson.build | 1 + drivers/net/hinic3/hinic3_ethdev.c | 275 ++++++-- drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- drivers/net/hinic3/hinic3_rx.c | 267 +++++-- drivers/net/hinic3/hinic3_rx.h | 182 ++++- drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ drivers/net/hinic3/hinic3_tx.h | 154 +++- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ drivers/net/hinic3/htn_adapt/meson.build | 7 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 36 files changed, 3379 insertions(+), 1445 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 
drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build -- 2.45.1.windows.1 ^ permalink raw reply [flat|nested] 80+ messages in thread
* [PATCH v8 1/7] net/hinic3: add support for new SPx series NIC 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang @ 2026-03-24 1:55 ` Feifei Wang 2026-03-24 1:55 ` [PATCH v8 2/7] net/hinic3: add enhance cmdq " Feifei Wang ` (9 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-24 1:55 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add new device id to support Huawei new SPx series Network Adapters. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- doc/guides/nics/features/hinic3.ini | 11 ++++++++--- doc/guides/nics/hinic3.rst | 5 ++++- doc/guides/rel_notes/release_26_03.rst | 8 ++++++++ drivers/net/hinic3/base/hinic3_csr.h | 18 +++++++++--------- drivers/net/hinic3/base/hinic3_hwif.c | 10 +++++++--- drivers/net/hinic3/hinic3_ethdev.c | 14 +++++++------- 6 files changed, 43 insertions(+), 23 deletions(-) diff --git a/doc/guides/nics/features/hinic3.ini b/doc/guides/nics/features/hinic3.ini index bc70c887cb..74fea318ee 100644 --- a/doc/guides/nics/features/hinic3.ini +++ b/doc/guides/nics/features/hinic3.ini @@ -20,14 +20,18 @@ Allmulticast mode = Y Unicast MAC filter = Y Multicast MAC filter = Y RSS hash = Y +RSS key update = Y RSS reta update = Y +SR-IOV = Y VLAN filter = Y Flow control = Y CRC offload = Y +VLAN offload = Y +QinQ offload = P L3 checksum offload = Y L4 checksum offload = Y -Inner L3 checksum = Y -Inner L4 checksum = Y +Inner L3 checksum = P +Inner L4 checksum = P Packet type parsing = Y Basic stats = Y Extended stats = Y @@ -40,12 +44,13 @@ ARMv8 = Y [rte_flow items] any = Y -eth = Y +eth = P icmp = Y ipv4 = Y ipv6 = Y tcp = Y udp = Y vxlan = Y + [rte_flow actions] queue = Y diff --git a/doc/guides/nics/hinic3.rst b/doc/guides/nics/hinic3.rst index e10f6bb450..a6117c713f 100644 --- a/doc/guides/nics/hinic3.rst +++ b/doc/guides/nics/hinic3.rst @@ -16,16 +16,19 @@ Features - Receiver Side Scaling (RSS) - Flow filtering - Checksum offload +- VLAN/QinQ stripping and inserting - TSO offload - Promiscuous mode - Port hardware statistics +- Jumbo frames - Link state information - Link flow control - Scattered and gather for TX and RX - Allmulticast mode - MTU update - Multicast MAC filter -- Flow API +- NUMA support +- Generic Flow API - Set Link down or up - VLAN filter and VLAN offload - SR-IOV - Partially supported at this point, VFIO only diff --git a/doc/guides/rel_notes/release_26_03.rst b/doc/guides/rel_notes/release_26_03.rst index 3d2ed19eb8..cbc71af8f0 100644 --- a/doc/guides/rel_notes/release_26_03.rst +++ b/doc/guides/rel_notes/release_26_03.rst @@ -81,6 +81,14 @@ New Features * Added application-initiated device reset. * Added support for receive flow steering. +* **Updated Huawei hinic3 ethernet driver.** + + * Added support for Huawei new SPx NICs, including SP230 and SP920(DPU). + * Added support for GENEVE tunnel TSO, IP-in-IP tunnel TSO of SP230. + * Added support for VXLAN-GPE CKSUM of SP620. + * Added support for tunnel packet outer UDP checksum. + * Added support for QinQ of SP620. + * **Updated Intel idpf ethernet driver.** * Added support for time sync features. 
diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h index 94b10601c4..eceb34e9fd 100644 --- a/drivers/net/hinic3/base/hinic3_csr.h +++ b/drivers/net/hinic3/base/hinic3_csr.h @@ -5,15 +5,15 @@ #ifndef _HINIC3_CSR_H_ #define _HINIC3_CSR_H_ -#ifdef CONFIG_SP_VID_DID -#define PCI_VENDOR_ID_SPNIC 0x1F3F -#define HINIC3_DEV_ID_STANDARD 0x9020 -#define HINIC3_DEV_ID_VF 0x9001 -#else -#define PCI_VENDOR_ID_HUAWEI 0x19e5 -#define HINIC3_DEV_ID_STANDARD 0x0222 -#define HINIC3_DEV_ID_VF 0x375F -#endif +#define PCI_VENDOR_ID_HUAWEI 0x19e5 + +#define HINIC3_DEV_ID_SP620 0x0222 +#define HINIC3_DEV_ID_VF_SP620 0x375F + +#define HINIC3_DEV_ID_SP230 0x0229 +#define HINIC3_DEV_ID_VF_SP230 0x3750 + +#define HINIC3_DEV_ID_SP920 0x0224 /* * Bit30/bit31 for bar index flag. diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c index 080254bf44..c82b223fa0 100644 --- a/drivers/net/hinic3/base/hinic3_hwif.c +++ b/drivers/net/hinic3/base/hinic3_hwif.c @@ -138,7 +138,11 @@ #define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MASK)) -#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF) +static inline bool hinic3_is_vf_dev(const struct rte_pci_device *pdev) +{ + return pdev->id.device_id == HINIC3_DEV_ID_VF_SP620 || + pdev->id.device_id == HINIC3_DEV_ID_VF_SP230; +} uint32_t hinic3_hwif_read_reg(struct hinic3_hwif *hwif, uint32_t reg) @@ -552,7 +556,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) void *db_base = NULL; int cfg_bar; - cfg_bar = HINIC3_IS_VF_DEV(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR + cfg_bar = hinic3_is_vf_dev(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR : HINIC3_PF_PCI_CFG_REG_BAR; cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr; @@ -561,7 +565,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) "mem_resource addr is null, cfg_regs_base is NULL"); return -EFAULT; } - if (!HINIC3_IS_VF_DEV(pci_dev)) { + if (!hinic3_is_vf_dev(pci_dev)) { mgmt_reg_base = pci_dev->mem_resource[HINIC3_PCI_MGMT_REG_BAR].addr; if (mgmt_reg_base == NULL) { diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 0f72728a95..da2d6722d2 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -3521,13 +3521,13 @@ hinic3_dev_uninit(struct rte_eth_dev *dev) } static const struct rte_pci_id pci_id_hinic3_map[] = { -#ifdef CONFIG_SP_VID_DID - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)}, -#else - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)}, -#endif + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP620)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP620)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP230)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP920)}, {.vendor_id = 0}, }; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
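For quick reference, the device ids introduced above are: SP620 PF 0x0222 / VF 0x375F, SP230 PF 0x0229 / VF 0x3750, and SP920 0x0224. A small standalone sketch that maps an id to its variant name; the helper function is hypothetical and only the id values come from hinic3_csr.h.

/* Illustrative helper; only the id macros mirror hinic3_csr.h. */
#include <stdint.h>
#include <stdio.h>

#define HINIC3_DEV_ID_SP620	0x0222
#define HINIC3_DEV_ID_VF_SP620	0x375F
#define HINIC3_DEV_ID_SP230	0x0229
#define HINIC3_DEV_ID_VF_SP230	0x3750
#define HINIC3_DEV_ID_SP920	0x0224

static const char *hinic3_variant_name(uint16_t device_id)
{
	switch (device_id) {
	case HINIC3_DEV_ID_SP620:	return "SP620 PF";
	case HINIC3_DEV_ID_VF_SP620:	return "SP620 VF";
	case HINIC3_DEV_ID_SP230:	return "SP230 PF";
	case HINIC3_DEV_ID_VF_SP230:	return "SP230 VF";
	case HINIC3_DEV_ID_SP920:	return "SP920 (DPU)";
	default:			return "unknown";
	}
}

int main(void)
{
	printf("0x0229 -> %s\n", hinic3_variant_name(0x0229));
	return 0;
}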
* [PATCH v8 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-03-24 1:55 ` [PATCH v8 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang @ 2026-03-24 1:55 ` Feifei Wang 2026-03-24 1:55 ` [PATCH v8 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang ` (8 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-24 1:55 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add enhance command queue for new SPx series NIC New SPx series NIC uses enhance command queue to send messages to hardware NIC, which is different from previous SPx NIC's common command queue.HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 to adapt to the two types of NICs. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_cmd.h | 80 ++-- drivers/net/hinic3/base/hinic3_cmdq.c | 370 +++++++----------- drivers/net/hinic3/base/hinic3_cmdq.h | 112 +++++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 ++++++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++++ drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 77 ++-- drivers/net/hinic3/base/meson.build | 1 + 13 files changed, 627 insertions(+), 333 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h index 6042ca51bd..f2d5d47522 100644 --- a/drivers/net/hinic3/base/hinic3_cmd.h +++ b/drivers/net/hinic3/base/hinic3_cmd.h @@ -23,14 +23,21 @@ #define HINIC3_RSS_TYPE_GET(val, member) \ (((uint32_t)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1) +#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) + /* NIC CMDQ MODE. */ enum hinic3_ucode_cmd { - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT = 1, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE = 4, - HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE = 5, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE = 6, - HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, + HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, + HINIC3_UCODE_CMD_ARM_SQ, + HINIC3_UCODE_CMD_ARM_RQ, + HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_SET_IQ_ENABLE, + HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_VLAN_CTX, }; /* Commands between NIC to MPU. 
*/ @@ -51,6 +58,12 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_RX_LRO = 13, HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE = 16, + + HINIC3_NIC_CMD_CACHE_OUT_QP_RES = 17, + HINIC3_NIC_CMD_SET_RQ_CI_CTX = 18, + HINIC3_NIC_CMD_SET_RQ_ENABLE = 19, + /* MAC & VLAN CFG */ HINIC3_NIC_CMD_GET_MAC = 20, HINIC3_NIC_CMD_SET_MAC = 21, @@ -59,6 +72,10 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + + HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN = 34, + HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN = 35, + /* RSS CFG */ HINIC3_NIC_CMD_RSS_CFG = 60, HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, @@ -89,6 +106,7 @@ enum hinic3_mgmt_cmd { HINIC3_MGMT_CMD_CFG_PAGESIZE = 22, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG = 23, HINIC3_MGMT_CMD_SET_DMA_ATTR = 25, + HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT = 26, HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40, HINIC3_MGMT_CMD_GET_FW_VERSION = 60, HINIC3_MGMT_CMD_GET_BOARD_INFO = 61, @@ -97,39 +115,39 @@ enum hinic3_mgmt_cmd { }; enum mag_cmd { - SERDES_CMD_PROCESS = 0, + SERDES_CMD_PROCESS = 0, - MAG_CMD_SET_PORT_CFG = 1, - MAG_CMD_SET_PORT_ADAPT = 2, - MAG_CMD_CFG_LOOPBACK_MODE = 3, + MAG_CMD_SET_PORT_CFG = 1, + MAG_CMD_SET_PORT_ADAPT = 2, + MAG_CMD_CFG_LOOPBACK_MODE = 3, - MAG_CMD_GET_PORT_ENABLE = 5, - MAG_CMD_SET_PORT_ENABLE = 6, - MAG_CMD_GET_LINK_STATUS = 7, - MAG_CMD_SET_LINK_FOLLOW = 8, - MAG_CMD_SET_PMA_ENABLE = 9, - MAG_CMD_CFG_FEC_MODE = 10, + MAG_CMD_GET_PORT_ENABLE = 5, + MAG_CMD_SET_PORT_ENABLE = 6, + MAG_CMD_GET_LINK_STATUS = 7, + MAG_CMD_SET_LINK_FOLLOW = 8, + MAG_CMD_SET_PMA_ENABLE = 9, + MAG_CMD_CFG_FEC_MODE = 10, /* PHY */ - MAG_CMD_GET_XSFP_INFO = 60, - MAG_CMD_SET_XSFP_ENABLE = 61, - MAG_CMD_GET_XSFP_PRESENT = 62, + MAG_CMD_GET_XSFP_INFO = 60, + MAG_CMD_SET_XSFP_ENABLE = 61, + MAG_CMD_GET_XSFP_PRESENT = 62, /* sfp/qsfp single byte read/write, for equipment test. 
*/ - MAG_CMD_SET_XSFP_RW = 63, - MAG_CMD_CFG_XSFP_TEMPERATURE = 64, + MAG_CMD_SET_XSFP_RW = 63, + MAG_CMD_CFG_XSFP_TEMPERATURE = 64, - MAG_CMD_WIRE_EVENT = 100, - MAG_CMD_LINK_ERR_EVENT = 101, + MAG_CMD_WIRE_EVENT = 100, + MAG_CMD_LINK_ERR_EVENT = 101, - MAG_CMD_EVENT_PORT_INFO = 150, - MAG_CMD_GET_PORT_STAT = 151, - MAG_CMD_CLR_PORT_STAT = 152, - MAG_CMD_GET_PORT_INFO = 153, - MAG_CMD_GET_PCS_ERR_CNT = 154, - MAG_CMD_GET_MAG_CNT = 155, - MAG_CMD_DUMP_ANTRAIN_INFO = 156, + MAG_CMD_EVENT_PORT_INFO = 150, + MAG_CMD_GET_PORT_STAT = 151, + MAG_CMD_CLR_PORT_STAT = 152, + MAG_CMD_GET_PORT_INFO = 153, + MAG_CMD_GET_PCS_ERR_CNT = 154, + MAG_CMD_GET_MAG_CNT = 155, + MAG_CMD_DUMP_ANTRAIN_INFO = 156, - MAG_CMD_MAX = 0xFF + MAG_CMD_MAX = 0xFF }; #endif /* _HINIC3_CMD_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c index e2b30ff94e..9c27c6f54c 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.c +++ b/drivers/net/hinic3/base/hinic3_cmdq.c @@ -5,6 +5,7 @@ #include "hinic3_compat.h" #include "hinic3_cmd.h" #include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" #include "hinic3_hwdev.h" #include "hinic3_hwif.h" #include "hinic3_mgmt.h" @@ -125,17 +126,17 @@ #define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi)) -#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) - #define FIRST_DATA_TO_WRITE_LAST sizeof(uint64_t) -#define WQE_LCMD_SIZE 64 -#define WQE_SCMD_SIZE 64 +#define WQE_LCMDQ_SIZE 64 +#define WQE_SCMDQ_SIZE 64 +#define WQE_ENHANCE_CMDQ_SIZE 32 #define COMPLETE_LEN 3 #define CMDQ_WQEBB_SIZE 64 #define CMDQ_WQEBB_SHIFT 6 +#define CMDQ_ENHANCE_WQEBB_SHIFT 4 #define CMDQ_WQE_SIZE 64 @@ -203,43 +204,6 @@ hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf) rte_free(cmd_buf); } -static uint32_t -cmdq_wqe_size(enum cmdq_wqe_type wqe_type) -{ - uint32_t wqe_size = 0; - - switch (wqe_type) { - case WQE_LCMD_TYPE: - wqe_size = WQE_LCMD_SIZE; - break; - case WQE_SCMD_TYPE: - wqe_size = WQE_SCMD_SIZE; - break; - } - - return wqe_size; -} - -static uint32_t -cmdq_get_wqe_size(enum bufdesc_len len) -{ - uint32_t wqe_size = 0; - - switch (len) { - case BUFDESC_LCMD_LEN: - wqe_size = WQE_LCMD_SIZE; - break; - case BUFDESC_SCMD_LEN: - wqe_size = WQE_SCMD_SIZE; - break; - default: - PMD_DRV_LOG(ERR, "Invalid bufdesc_len"); - break; - } - - return wqe_size; -} - static void cmdq_set_completion(struct hinic3_cmdq_completion *complete, struct hinic3_cmd_buf *buf_out) @@ -274,11 +238,11 @@ cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type, } static void -cmdq_wqe_fill(void *dst, void *src) +cmdq_wqe_fill(void *dst, void *src, int wqe_size) { memcpy((void *)((uint8_t *)dst + FIRST_DATA_TO_WRITE_LAST), (void *)((uint8_t *)src + FIRST_DATA_TO_WRITE_LAST), - CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST); + wqe_size - FIRST_DATA_TO_WRITE_LAST); /* The first 8 bytes should be written last. */ rte_atomic_thread_fence(rte_memory_order_release); @@ -369,182 +333,94 @@ cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in); } -/** - * Prepare necessary context for command queue, send a synchronous command with - * a direct response to hardware. It waits for completion of command by polling - * command queue for a response. - * - * @param[in] cmdq - * The command queue object that represents the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. 
- * @param[in] buf_in - * The input buffer containing the command parameters. - * @param[out] out_param - * A pointer to the location where the response data will be stored, if - * available. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - uint64_t *out_param, uint32_t timeout) +static void +cmdq_sync_wqe_prepare(struct hinic3_cmdq *cmdq, uint8_t mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + struct hinic3_cmdq_wqe *curr_wqe, uint16_t curr_pi, + enum hinic3_cmdq_cmd_type nic_cmd_type) { struct hinic3_cmdq_wqe wqe; - struct hinic3_wq *wq = cmdq->wq; - struct hinic3_cmdq_wqe *curr_wqe = NULL; - struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + int wrapped, wqe_size; + enum cmdq_cmd_type cmd_type; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + wqe_size = cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ ? + WQE_LCMDQ_SIZE : WQE_ENHANCE_CMDQ_SIZE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ - rte_spinlock_lock(&cmdq->cmdq_lock); + memset(&wqe, 0, (uint32_t)wqe_size); - curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } - - memset(&wqe, 0, sizeof(wqe)); wrapped = cmdq->wrapped; - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped, - mod, cmd, curr_prod_idx); - + cmd_type = (nic_cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP) ? + SYNC_CMD_DIRECT_RESP : SYNC_CMD_SGE_RESP; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_set_lcmd_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd, curr_pi); + else + hinic3_enhance_cmdq_set_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd); + /* The data written to HW should be in Big Endian Format */ hinic3_cpu_to_hw(&wqe, wqe_size); - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP; - - cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ - - if (out_param) { - wqe_lcmd = &curr_wqe->wqe_lcmd; - *out_param = rte_cpu_to_be_64(wqe_lcmd->completion.direct_resp); - } - - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; - -cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); - - return err; + cmdq_wqe_fill(curr_wqe, &wqe, wqe_size); } -/** - * Send a synchronous command with detailed response and wait for the - * completion. - * - * @param[in] cmdq - * The command queue object representing the queue to send the command to. 
- * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the parameters for the command. - * @param[out] buf_out - * The output buffer where the detailed response from the hardware will be - * stored. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - struct hinic3_cmd_buf *buf_out, uint32_t timeout) +#define NUM_WQEBBS_FOR_CMDQ_WQE 1 +#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2 + +static int cmdq_sync_cmd(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + uint64_t *out_param, uint32_t timeout, + enum hinic3_cmdq_cmd_type nic_cmd_type) { - struct hinic3_cmdq_wqe wqe; struct hinic3_wq *wq = cmdq->wq; struct hinic3_cmdq_wqe *curr_wqe = NULL; uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + uint32_t time; + uint64_t *direct_resp = NULL; + int err; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + num_wqebbs = (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) ? + NUM_WQEBBS_FOR_CMDQ_WQE : NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ + /* Keep wrapped and doorbell index correct */ rte_spinlock_lock(&cmdq->cmdq_lock); curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { + if (!curr_wqe) { err = -EBUSY; goto cmdq_unlock; } - memset(&wqe, 0, sizeof(wqe)); - wrapped = cmdq->wrapped; + cmdq_sync_wqe_prepare(cmdq, mod, cmd, buf_in, buf_out, + curr_wqe, curr_prod_idx, nic_cmd_type); + + cmdq->cmd_infos[curr_prod_idx].cmd_type = nic_cmd_type; next_prod_idx = curr_prod_idx + num_wqebbs; if (next_prod_idx >= wq->q_depth) { cmdq->wrapped = !cmdq->wrapped; next_prod_idx -= wq->q_depth; } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped, - mod, cmd, curr_prod_idx); - - hinic3_cpu_to_hw(&wqe, wqe_size); - - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP; - - cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); + cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); + time = msecs_to_cycles(timeout ? 
timeout : CMDQ_CMD_TIMEOUT); + err = hinic3_cmdq_poll_msg(cmdq, time); if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); + PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", curr_prod_idx); err = -ETIMEDOUT; goto cmdq_unlock; } - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ + rte_atomic_thread_fence(rte_memory_order_acquire); /* Read error code after completion */ + + if (out_param) { + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + direct_resp = + (uint64_t *)(&curr_wqe->wqe_lcmd.completion.direct_resp); + else + direct_resp = (uint64_t *) + (&curr_wqe->enhanced_cmdq_wqe.completion.sge_resp_lo_addr); + + *out_param = rte_cpu_to_be_64(*direct_resp); + } if (cmdq->errcode[curr_prod_idx]) err = cmdq->errcode[curr_prod_idx]; @@ -588,7 +464,8 @@ wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs) int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, - struct hinic3_cmd_buf *buf_in, uint64_t *out_param, uint32_t timeout) + struct hinic3_cmd_buf *buf_in, + uint64_t *out_param, uint32_t timeout) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; int err; @@ -605,8 +482,8 @@ hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, out_param, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, + NULL, out_param, timeout, HINIC3_CMD_TYPE_DIRECT_RESP); } int @@ -628,8 +505,8 @@ hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, buf_out, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, buf_out, + NULL, timeout, HINIC3_CMD_TYPE_SGE_RESP); } static void @@ -643,21 +520,23 @@ clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) { struct hinic3_ctrl *ctrl = NULL; uint32_t header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info); - int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN); - uint32_t wqe_size = cmdq_get_wqe_size(buf_len); uint16_t num_wqebbs; - - if (wqe_size == WQE_LCMD_SIZE) - ctrl = &wqe->wqe_lcmd.ctrl; - else - ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; - - /* Clear HW busy bit. */ - ctrl->ctrl_info = 0; + enum data_format df; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT); + if (df == DATA_SGE) + ctrl = &wqe->wqe_lcmd.ctrl; + else + ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; + ctrl->ctrl_info = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE; + } else { + wqe->enhanced_cmdq_wqe.completion.cs_format = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; + } rte_atomic_thread_fence(rte_memory_order_release); /**< Verify wqe is cleared. 
*/ - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq); hinic3_put_wqe(cmdq->wq, num_wqebbs); } @@ -735,25 +614,28 @@ static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; - struct hinic3_cmd_cmdq_ctxt cmdq_ctxt; - enum hinic3_cmdq_type cmdq_type; + struct hinic3_cmd_cmdq_ctxt cmdq_ctxt = {0}; + enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; uint16_t out_size = sizeof(cmdq_ctxt); + uint16_t cmd; int err; - for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { - memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt)); - cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + if (hwdev->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + cmd = HINIC3_MGMT_CMD_SET_CMDQ_CTXT; + } else { + cmdq_ctxt.enhance_ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_enhance_ctxt; + cmd = HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT; + } cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev); cmdq_ctxt.cmdq_id = cmdq_type; - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, - HINIC3_MGMT_CMD_SET_CMDQ_CTXT, + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, &cmdq_ctxt, sizeof(cmdq_ctxt), &cmdq_ctxt, &out_size); - if (err || !out_size || cmdq_ctxt.status) { - PMD_DRV_LOG(ERR, - "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", + PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", err, cmdq_ctxt.status, out_size); return -EFAULT; } @@ -794,6 +676,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) cmdqs->cmdqs_db_base = (uint8_t *)db_base; for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + cmdqs->cmdq[cmdq_type].cmdqs = cmdqs; err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, &cmdqs->saved_wqs[cmdq_type], cmdq_type); if (err) { @@ -801,8 +684,11 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) goto init_cmdq_err; } - cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], - &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + if (cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], + &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + else + hinic3_enhance_cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type]); } err = hinic3_set_cmdq_ctxts(hwdev); @@ -821,11 +707,12 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) } int -hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_init(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = NULL; size_t saved_wqs_size; char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE]; + uint32_t wqebb_shift; int err; cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0); @@ -835,6 +722,14 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) hwdev->cmdqs = cmdqs; cmdqs->hwdev = hwdev; + if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev)) + cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ; + else + cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ; + + wqebb_shift = (cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) ? 
+ CMDQ_ENHANCE_WQEBB_SHIFT : CMDQ_WQEBB_SHIFT; + saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq); cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0); if (!cmdqs->saved_wqs) { @@ -844,8 +739,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE); - snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", - hwdev->port_id); + snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", hwdev->port_id); cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name, HINIC3_CMDQ_DEPTH * HINIC3_MAX_CMDQ_TYPES, 0, 0, @@ -857,8 +751,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES, - HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT, - HINIC3_CMDQ_DEPTH); + HINIC3_CMDQ_WQ_BUF_SIZE, wqebb_shift, HINIC3_CMDQ_DEPTH); if (err) { PMD_DRV_LOG(ERR, "Allocate cmdq failed"); goto cmdq_alloc_err; @@ -884,7 +777,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } void -hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdqs_free(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; @@ -900,14 +793,36 @@ hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) rte_free(cmdqs); } +static int +hinic3_check_cmdq_done(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) +{ + struct hinic3_ctrl *ctrl = NULL; + uint32_t ctrl_info; + + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + /* Only arm bit using scmd wqe, the wqe is lcmd. */ + ctrl = &wqe->wqe_lcmd.ctrl; + ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); + + if (!WQE_COMPLETED(ctrl_info)) + return -EBUSY; + } else { + ctrl_info = wqe->enhanced_cmdq_wqe.completion.cs_format; + ctrl_info = hinic3_hw_cpu32(ctrl_info); + + if (!ENHANCE_CMDQ_WQE_CS_GET(ctrl_info, HW_BUSY)) + return -EBUSY; + } + return 0; +} + static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) { struct hinic3_cmdq_wqe *wqe = NULL; struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - struct hinic3_ctrl *ctrl = NULL; struct hinic3_cmdq_cmd_info *cmd_info = NULL; - uint32_t status_info, ctrl_info; + uint32_t status_info; uint16_t ci; int errcode; uint64_t end; @@ -928,13 +843,10 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) return -EINVAL; } - /* Only arm bit is using scmd wqe, the wqe is lcmd. */ - wqe_lcmd = &wqe->wqe_lcmd; - ctrl = &wqe_lcmd->ctrl; + /* Only arm bit using scmd wqe, the wqe is lcmd. 
*/ end = cycles + msecs_to_cycles(timeout); do { - ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); - if (WQE_COMPLETED(ctrl_info)) { + if (hinic3_check_cmdq_done(cmdq, wqe) == 0) { done = 1; break; } @@ -943,8 +855,14 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) } while (time_before(cycles, end)); if (done) { - status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); - errcode = WQE_ERRCODE_GET(status_info, VAL); + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + wqe_lcmd = &wqe->wqe_lcmd; + status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); + errcode = WQE_ERRCODE_GET(status_info, VAL); + } else { + status_info = hinic3_hw_cpu32(wqe->enhanced_cmdq_wqe.completion.cs_format); + errcode = ENHANCE_CMDQ_WQE_CS_GET(status_info, ERR_CODE); + } cmdq_update_errcode(cmdq, ci, errcode); clear_wqe_complete_bit(cmdq, wqe); err = 0; diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h index deac909488..b31b61029e 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.h +++ b/drivers/net/hinic3/base/hinic3_cmdq.h @@ -13,25 +13,55 @@ /* Pmd driver uses 64, kernel l2nic uses 4096. */ #define HINIC3_CMDQ_DEPTH 64 -#define HINIC3_CMDQ_BUF_SIZE 2048U +#define HINIC3_CMDQ_BUF_SIZE 1024U #define HINIC3_CEQ_ID_CMDQ 0 -enum cmdq_scmd_type { CMDQ_SET_ARM_CMD = 2 }; +#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -enum cmdq_wqe_type { WQE_LCMD_TYPE = 0, WQE_SCMD_TYPE = 1 }; +enum hinic3_cmdq_mode { + HINIC3_NORMAL_CMDQ, + HINIC3_ENHANCE_CMDQ +}; + +enum cmdq_scmd_type { + CMDQ_SET_ARM_CMD = 2 +}; + +enum cmdq_wqe_type { + WQE_LCMD_TYPE, + WQE_SCMD_TYPE +}; -enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 }; +enum ctrl_sect_len { + CTRL_SECT_LEN = 1, + CTRL_DIRECT_SECT_LEN = 2 +}; -enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 }; +enum bufdesc_len { + BUFDESC_LCMD_LEN = 2, + BUFDESC_SCMD_LEN = 3 +}; -enum data_format { DATA_SGE = 0}; +enum data_format { + DATA_SGE +}; -enum completion_format { COMPLETE_DIRECT = 0, COMPLETE_SGE = 1 }; +enum completion_format { + COMPLETE_DIRECT, + COMPLETE_SGE +}; -enum completion_request { CEQ_SET = 1 }; +enum completion_request { + CEQ_SET = 1 +}; -enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD }; +enum cmdq_cmd_type { + SYNC_CMD_DIRECT_RESP, + SYNC_CMD_SGE_RESP, + ASYNC_CMD +}; enum hinic3_cmdq_type { HINIC3_CMDQ_SYNC, @@ -44,17 +74,63 @@ enum hinic3_db_src_type { HINIC3_DB_SRC_L2NIC_SQ_TYPE }; -enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE }; +enum hinic3_cmdq_db_type { + HINIC3_DB_SQ_RQ_TYPE, + HINIC3_DB_CMDQ_TYPE +}; /* Cmdq ack type. */ enum hinic3_ack_type { HINIC3_ACK_TYPE_CMDQ = 0, HINIC3_ACK_TYPE_SHARE_CQN = 1, HINIC3_ACK_TYPE_APP_CQN = 2, - HINIC3_MOD_ACK_MAX = 15 }; +struct cmdq_enhance_completion { + uint32_t cs_format; + uint32_t sge_resp_hi_addr; + uint32_t sge_resp_lo_addr; + uint32_t sge_resp_len; /* bit 14~31 rsvd, soft can't use. 
*/ +}; + +struct cmdq_enhance_response { + uint32_t cs_format; + uint32_t resvd; + uint64_t direct_data; +}; + +struct sge_send_info { + uint32_t sge_hi_addr; + uint32_t sge_li_addr; + uint32_t seg_len; + uint32_t rsvd; +}; + +struct ctrl_section { + uint32_t header; + uint32_t rsv; + uint32_t sge_send_hi_addr; + uint32_t sge_send_lo_addr; +}; + +struct enhanced_cmdq_wqe { + struct ctrl_section ctrl_sec; /* 16B */ + struct cmdq_enhance_completion completion; /* 16B */ +}; + +/* Enhance cmdq context of hardware */ +struct enhance_cmdq_ctxt_info { + uint64_t eq_cfg; + uint64_t dfx_pi_ci; + + uint64_t pft_thd; + uint64_t pft_ci; + + uint64_t rsv; + uint64_t ci_cla_addr; +}; + /* Cmdq wqe ctrls. */ struct hinic3_cmdq_header { uint32_t header_info; @@ -126,6 +202,7 @@ struct hinic3_cmdq_wqe { union { struct hinic3_cmdq_inline_wqe inline_wqe; struct hinic3_cmdq_wqe_lcmd wqe_lcmd; + struct enhanced_cmdq_wqe enhanced_cmdq_wqe; }; }; @@ -142,8 +219,10 @@ struct hinic3_cmd_cmdq_ctxt { uint16_t func_idx; uint8_t cmdq_id; uint8_t rsvd1[5]; - - struct hinic3_cmdq_ctxt_info ctxt_info; + union { + struct hinic3_cmdq_ctxt_info ctxt_info; + struct enhance_cmdq_ctxt_info enhance_ctxt_info; + }; }; enum hinic3_cmdq_status { @@ -173,8 +252,10 @@ struct hinic3_cmdq { rte_spinlock_t cmdq_lock; struct hinic3_cmdq_ctxt_info cmdq_ctxt; + struct enhance_cmdq_ctxt_info cmdq_enhance_ctxt; struct hinic3_cmdq_cmd_info *cmd_infos; + struct hinic3_cmdqs *cmdqs; }; struct hinic3_cmdqs { @@ -188,6 +269,7 @@ struct hinic3_cmdqs { struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES]; uint32_t status; + uint8_t cmdq_mode; }; struct hinic3_cmd_buf { @@ -215,8 +297,8 @@ int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod int hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, uint32_t timeout); -int hinic3_init_cmdqs(struct hinic3_hwdev *hwdev); +int hinic3_cmdq_init(struct hinic3_hwdev *hwdev); -void hinic3_free_cmdqs(struct hinic3_hwdev *hwdev); +void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev); #endif /* _HINIC3_CMDQ_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.c b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c new file mode 100644 index 0000000000..e09597c9f3 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Huawei Technologies Co., Ltd + */ + +#include <rte_mbuf.h> + +#include "hinic3_compat.h" +#include "hinic3_hwdev.h" +#include "hinic3_hwif.h" +#include "hinic3_wq.h" +#include "hinic3_cmd.h" +#include "hinic3_mgmt.h" +#include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" + +#define WQ_PREFETCH_MAX 4 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +void +hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq) +{ + struct enhance_cmdq_ctxt_info *ctxt_info = &cmdq->cmdq_enhance_ctxt; + struct hinic3_wq *wq = cmdq->wq; + uint64_t cmdq_first_block_paddr, pfn; + uint16_t start_ci = (uint16_t)wq->cons_idx; + uint32_t start_pi = (uint16_t)wq->prod_idx; + + /* The data in HW is Big Endian Format */ + cmdq_first_block_paddr = wq->queue_buf_paddr; + pfn = CMDQ_PFN(cmdq_first_block_paddr, RTE_PGSIZE_4K); + + /* First part 16B */ + ctxt_info->eq_cfg = + ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) | + ENHANCED_CMDQ_SET(0, CTXT0_EQ) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) | + ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT); + + 
ctxt_info->dfx_pi_ci = + ENHANCED_CMDQ_SET(0, CTXT1_Q_DIS) | + ENHANCED_CMDQ_SET(0, CTXT1_ERR_CODE) | + ENHANCED_CMDQ_SET(start_pi, CTXT1_PI) | + ENHANCED_CMDQ_SET(start_ci, CTXT1_CI); + + /* Second part 16B */ + ctxt_info->pft_thd = + ENHANCED_CMDQ_SET(CI_HIGN_IDX(start_ci), CTXT2_PFT_CI) | + ENHANCED_CMDQ_SET(1, CTXT2_O_BIT) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MIN, CTXT2_PFT_MIN) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MAX, CTXT2_PFT_MAX) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_THRESHOLD, CTXT2_PFT_THD); + ctxt_info->pft_ci = + ENHANCED_CMDQ_SET(pfn, CTXT3_PFT_CI_ADDR) | + ENHANCED_CMDQ_SET(start_ci, CTXT3_PFT_CI); + + /* Third part 16B */ + pfn = WQ_BLOCK_PFN(cmdq_first_block_paddr); + + ctxt_info->ci_cla_addr = ENHANCED_CMDQ_SET(pfn, CTXT4_CI_CLA_ADDR); +} + +static void +enhance_cmdq_set_completion(struct cmdq_enhance_completion *completion, + const struct hinic3_cmd_buf *buf_out) +{ + completion->sge_resp_hi_addr = upper_32_bits(buf_out->dma_addr); + completion->sge_resp_lo_addr = lower_32_bits(buf_out->dma_addr); + completion->sge_resp_len = HINIC3_CMDQ_BUF_SIZE; +} + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, + enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, + struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd) +{ + struct enhanced_cmdq_wqe *enhanced_wqe = &wqe->enhanced_cmdq_wqe; + + enhanced_wqe->ctrl_sec.header = + ENHANCE_CMDQ_WQE_HEADER_SET(buf_in->size, SEND_SGE_LEN) | + ENHANCE_CMDQ_WQE_HEADER_SET(1, BDSL) | + ENHANCE_CMDQ_WQE_HEADER_SET(DATA_SGE, DF) | + ENHANCE_CMDQ_WQE_HEADER_SET(NORMAL_WQE_TYPE, DN) | + ENHANCE_CMDQ_WQE_HEADER_SET(COMPACT_WQE_TYPE, EC) | + ENHANCE_CMDQ_WQE_HEADER_SET((uint32_t)wrapped, HW_BUSY_BIT); + + enhanced_wqe->ctrl_sec.sge_send_hi_addr = upper_32_bits(buf_in->dma_addr); + enhanced_wqe->ctrl_sec.sge_send_lo_addr = lower_32_bits(buf_in->dma_addr); + + enhanced_wqe->completion.cs_format = + ENHANCE_CMDQ_WQE_CS_SET(cmd, CMD) | + ENHANCE_CMDQ_WQE_CS_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE) | + ENHANCE_CMDQ_WQE_CS_SET(mod, MOD); + + switch (cmd_type) { + case SYNC_CMD_DIRECT_RESP: + enhanced_wqe->completion.cs_format |= ENHANCE_CMDQ_WQE_CS_SET(INLINE_DATA, CF); + break; + case SYNC_CMD_SGE_RESP: + if (buf_out) { + enhanced_wqe->completion.cs_format |= + ENHANCE_CMDQ_WQE_CS_SET(SGE_RESPONSE, CF); + enhance_cmdq_set_completion(&enhanced_wqe->completion, buf_out); + } + break; + case ASYNC_CMD: + break; + } +} diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.h b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h new file mode 100644 index 0000000000..8de0ae4d71 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_CMDQ_ENHANCE_H_ +#define _HINIC3_CMDQ_ENHANCE_H_ + +#include "hinic3_mgmt.h" + +#define NORMAL_WQE_TYPE 0 +#define COMPACT_WQE_TYPE 1 + +/* First part 16B */ +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT0_RSV1_SHIFT 52 +#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT 53 +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT 61 +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT 62 +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63 + +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT0_RSV1_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_EQ_MASK 0xFFU +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_SHIFT 0 
+#define ENHANCED_CMDQ_CTXT1_ERR_CODE_SHIFT 1 +#define ENHANCED_CMDQ_CTXT1_RSV1_SHIFT 3 +#define ENHANCED_CMDQ_CTXT1_PI_SHIFT 32 +#define ENHANCED_CMDQ_CTXT1_CI_SHIFT 48 + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_MASK 0x1U +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_MASK 0x3U +#define ENHANCED_CMDQ_CTXT1_RSV1_MASK 0x1FFFFFFFU +#define ENHANCED_CMDQ_CTXT1_PI_MASK 0xFFFFU +#define ENHANCED_CMDQ_CTXT1_CI_MASK 0xFFFFU + +/* Second part 16B */ +#define ENHANCED_CMDQ_CTXT2_PFT_CI_SHIFT 0 +#define ENHANCED_CMDQ_CTXT2_O_BIT_SHIFT 4 +#define ENHANCED_CMDQ_CTXT2_PFT_THD_SHIFT 32 +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_SHIFT 46 +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_SHIFT 57 + +#define ENHANCED_CMDQ_CTXT2_PFT_CI_MASK 0xFU +#define ENHANCED_CMDQ_CTXT2_O_BIT_MASK 0x1U +#define ENHANCED_CMDQ_CTXT2_PFT_THD_MASK 0x3FFFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_MASK 0x7FFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_MASK 0x7FU + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT3_PFT_CI_SHIFT 52 + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT3_PFT_CI_MASK 0xFFFFU + +/* Third part 16B */ +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_MASK 0x7FFFFFFFFFFFFFU + +#define ENHANCED_CMDQ_SET(val, member) \ + (((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \ + ENHANCED_CMDQ_##member##_SHIFT) + +#define CI_IDX_HIGH_SHIFH 12 +#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_SHIFT 0 +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_SHIFT 19 +#define ENHANCE_CMDQ_WQE_HEADER_DF_SHIFT 28 +#define ENHANCE_CMDQ_WQE_HEADER_DN_SHIFT 29 +#define ENHANCE_CMDQ_WQE_HEADER_EC_SHIFT 30 +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_MASK 0x3FFFFU +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_HEADER_DF_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_DN_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_EC_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_HEADER_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_HEADER_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0 +#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT 4 +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12 +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT 14 +#define ENHANCE_CMDQ_WQE_CS_MOD_SHIFT 16 +#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU +#define ENHANCE_CMDQ_WQE_CS_CMD_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK 0x1U +#define ENHANCE_CMDQ_WQE_CS_MOD_MASK 0x1FU +#define ENHANCE_CMDQ_WQE_CS_CF_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_CS_##member##_MASK) + +enum complete_format { + INLINE_DATA, + SGE_RESPONSE +}; + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd); + +void hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq 
*cmdq); + +#endif /*_HINIC3_CMDQ_ENHANCE_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c index d259b88a2d..6541bc0428 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.c +++ b/drivers/net/hinic3/base/hinic3_hw_comm.c @@ -12,7 +12,7 @@ #include "hinic3_wq.h" #include "hinic3_nic_cfg.h" -/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */ +/* Buffer sizes must be in ascending order. */ const uint32_t hinic3_hw_rx_buf_size[] = { HINIC3_RX_BUF_SIZE_32B, HINIC3_RX_BUF_SIZE_64B, @@ -239,11 +239,14 @@ hinic3_convert_rx_buf_size(uint32_t rx_buf_sz, uint32_t *match_sz) } static uint16_t -get_hw_rx_buf_size(uint32_t rx_buf_sz) +get_hw_rx_buf_size(struct hinic3_hwdev *hwdev, uint32_t rx_buf_sz) { uint16_t num_hw_types = RTE_DIM(hinic3_hw_rx_buf_size); uint16_t i; + if (HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev)) + return rx_buf_sz; + for (i = 0; i < num_hw_types; i++) { if (hinic3_hw_rx_buf_size[i] == rx_buf_sz) return i; @@ -271,8 +274,12 @@ hinic3_set_root_ctxt(struct hinic3_hwdev *hwdev, uint32_t rq_depth, root_ctxt.cmdq_depth = 0; root_ctxt.lro_en = 1; root_ctxt.rq_depth = (uint16_t)rte_log2_u32(rq_depth); - root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz); + root_ctxt.rx_buf_sz = get_hw_rx_buf_size(hwdev, rx_buf_sz); root_ctxt.sq_depth = (uint16_t)rte_log2_u32(sq_depth); + root_ctxt.cmdq_mode = hwdev->cmdqs->cmdq_mode; + + if (hwdev->cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) + root_ctxt.cmdq_depth--; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, @@ -403,7 +410,7 @@ hinic3_comm_features_nego(struct hinic3_hwdev *hwdev, uint16_t out_size = sizeof(feature_nego); int err; - if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD) + if (!hwdev || !s_feature || size > HINIC3_MAX_FEATURE_QWORD) return -EINVAL; memset(&feature_nego, 0, sizeof(feature_nego)); diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h index b86f5aad8f..42ff04ee9d 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.h +++ b/drivers/net/hinic3/base/hinic3_hw_comm.h @@ -9,17 +9,17 @@ #define HINIC3_MGMT_CMD_OP_GET 0 #define HINIC3_MGMT_CMD_OP_SET 1 -#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 -#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 -#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 -#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 -#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 - -#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU -#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU -#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U +#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 +#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 +#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 +#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 +#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 + +#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU +#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU +#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U #define HINIC3_MSIX_CNT_SET(val, member) \ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \ @@ -129,7 +129,7 @@ struct hinic3_cmd_root_ctxt { uint8_t cmdq_depth; uint16_t rx_buf_sz; uint8_t lro_en; - uint8_t rsvd1; + uint8_t cmdq_mode; uint16_t sq_depth; uint16_t rq_depth; uint64_t rsvd2; @@ -143,17 +143,16 @@ enum hinic3_fw_ver_type { HINIC3_FW_VER_TYPE_CFG, }; -#define MGMT_MSG_CMD_OP_SET 1 -#define MGMT_MSG_CMD_OP_GET 0 +#define 
MGMT_MSG_CMD_OP_SET 1 +#define MGMT_MSG_CMD_OP_GET 0 -#define COMM_MAX_FEATURE_QWORD 4 struct comm_cmd_feature_nego { struct mgmt_msg_head head; uint16_t func_id; uint8_t opcode; /**< 1: set, 0: get. */ uint8_t rsvd; - uint64_t s_feature[COMM_MAX_FEATURE_QWORD]; + uint64_t s_feature[HINIC3_MAX_FEATURE_QWORD]; }; #define HINIC3_FW_VERSION_LEN 16 diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c index 668bbf4a0e..5d12cf7b5f 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.c +++ b/drivers/net/hinic3/base/hinic3_hwdev.c @@ -261,7 +261,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) { int err; - err = hinic3_init_cmdqs(hwdev); + err = hinic3_cmdq_init(hwdev); if (err) { PMD_DRV_LOG(ERR, "Init cmd queues failed"); return err; @@ -276,7 +276,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) return 0; set_cmdq_depth_err: - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); return err; } @@ -284,7 +284,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev) { - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); } static void @@ -426,6 +426,12 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) goto func_reset_err; } + err = hinic3_get_comm_features(hwdev, hwdev->features, HINIC3_MAX_FEATURE_QWORD); + if (err) { + PMD_DRV_LOG(ERR, "Get comm features failed"); + goto get_common_features_err; + } + err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1); if (err) goto set_used_state_err; @@ -444,6 +450,7 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0); set_used_state_err: func_reset_err: +get_common_features_err: get_func_info_err: free_mgmt_channel(hwdev); diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h index 161f1e2de5..c6661aa1a6 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.h +++ b/drivers/net/hinic3/base/hinic3_hwdev.h @@ -23,6 +23,18 @@ enum hinic3_set_arm_type { HINIC3_SET_ARM_TYPE_NUM }; +enum { + HINIC3_F_API_CHAIN = 1U << 0, + HINIC3_F_CLP = 1U << 1, + HINIC3_F_CHANNEL_DETECT = 1U << 2, + HINIC3_F_MBOX_SEGMENT = 1U << 3, + HINIC3_F_CMDQ_NUM = 1U << 4, + HINIC3_F_VIRTIO_VQ_SIZE = 1U << 5, + HINIC3_F_EXTEND_CAP = 1U << 6, + HINIC3_F_SMF_CACHE_INVALID = 1U << 7, + HINIC3_F_ONLY_ENHANCE_CMDQ = 1U << 8, + HINIC3_F_USE_REAL_RX_BUF_SIZE = 1U << 9, +}; struct hinic3_page_addr { void *virt_addr; uint64_t phys_addr; @@ -78,6 +90,11 @@ struct hinic3_hw_stats { #define HINIC3_CHIP_FAULT_SIZE (110 * 1024) #define MAX_DRV_BUF_SIZE 4096 +#define HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_ONLY_ENHANCE_CMDQ) +#define HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_USE_REAL_RX_BUF_SIZE) + struct nic_cmd_chip_fault_stats { uint32_t offset; uint8_t chip_fault_stats[MAX_DRV_BUF_SIZE]; @@ -141,6 +158,7 @@ struct hinic3_hwdev { uint16_t max_vfs; uint16_t link_status; + uint64_t features[HINIC3_MAX_FEATURE_QWORD]; }; bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev); diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c index 5db6d49922..b1f850dfff 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.c +++ b/drivers/net/hinic3/base/hinic3_mgmt.c @@ -13,6 +13,8 @@ #define SEGMENT_LEN 48 #define MGMT_MSG_MAX_SEQ_ID \ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN) +#define MGMT_MSG_LAST_SEG_MAX_LEN \ + 
(MAX_PF_MGMT_BUF_SIZE - SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID) #define BUF_OUT_DEFAULT_SIZE 1 @@ -34,7 +36,8 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, uint8_t seq_id, uint8_t seg_len, uint16_t msg_id) { - if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN) + if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN || + (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)) return false; if (seq_id == 0) { diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h index f8148406d3..4e77b9bec4 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.h +++ b/drivers/net/hinic3/base/hinic3_mgmt.h @@ -70,6 +70,8 @@ typedef enum { #define HINIC3_TOE_RES (1 << RES_TYPE_TOE) #define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC) +#define HINIC3_MAX_FEATURE_QWORD 4 + struct hinic3_recv_msg { void *msg; diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index c35fefdeac..ac44da46c2 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -48,6 +48,46 @@ static const struct vf_msg_handler vf_mag_cmd_handler[] = { }, }; +int +hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, + uint16_t cmd, void *buf_in, uint16_t in_size, + void *buf_out, uint16_t *out_size) +{ + uint32_t i; + bool cmd_to_pf = false; + struct hinic3_handler_info handler_info = { + .cmd = cmd, + .buf_in = buf_in, + .in_size = in_size, + .buf_out = buf_out, + .out_size = out_size, + .dst_func = HINIC3_MGMT_SRC_ID, + .direction = HINIC3_MSG_DIRECT_SEND, + .ack_type = HINIC3_MSG_ACK, + }; + + if (hwdev == NULL) + return -EINVAL; + + if (hinic3_func_type(hwdev) == TYPE_VF) { + if (mod == HINIC3_MOD_HILINK) { + for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { + if (cmd == vf_mag_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } else if (mod == HINIC3_MOD_L2NIC) { + for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { + if (cmd == vf_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } + } + if (cmd_to_pf) + handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); + + return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); +} + /** * Set CI table for a SQ. 
* @@ -1712,43 +1752,6 @@ hinic3_set_rq_flush(struct hinic3_hwdev *hwdev, uint16_t q_id) return err; } -int -hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, - uint16_t cmd, void *buf_in, uint16_t in_size, - void *buf_out, uint16_t *out_size) -{ - uint32_t i; - bool cmd_to_pf = false; - struct hinic3_handler_info handler_info = { - .cmd = cmd, - .buf_in = buf_in, - .in_size = in_size, - .buf_out = buf_out, - .out_size = out_size, - .dst_func = HINIC3_MGMT_SRC_ID, - .direction = HINIC3_MSG_DIRECT_SEND, - .ack_type = HINIC3_MSG_ACK, - }; - - if (hinic3_func_type(hwdev) == TYPE_VF) { - if (mod == HINIC3_MOD_HILINK) { - for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { - if (cmd == vf_mag_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } else if (mod == HINIC3_MOD_L2NIC) { - for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { - if (cmd == vf_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } - } - if (cmd_to_pf) - handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); - - return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); -} - int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status) diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build index 48ac7a47f5..729980d087 100644 --- a/drivers/net/hinic3/base/meson.build +++ b/drivers/net/hinic3/base/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2025 Huawei Technologies Co., Ltd base_sources = files( + 'hinic3_cmdq_enhance.c', 'hinic3_cmdq.c', 'hinic3_eqs.c', 'hinic3_hw_cfg.c', -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
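The patch above makes the driver negotiate a capability bitmap with the management firmware (hinic3_get_comm_features() fills hwdev->features[]) and then gate behaviour on individual bits; for example HINIC3_IS_USE_REAL_RX_BUF_SIZE() lets get_hw_rx_buf_size() pass the buffer size through unchanged instead of converting it to an index of the fixed size table. The standalone sketch below only illustrates that bit-test pattern; the struct, constants and output are simplified stand-ins, not the driver's real definitions.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the negotiated feature qwords (features[0]). */
enum {
	DEMO_F_ONLY_ENHANCE_CMDQ    = 1U << 8,
	DEMO_F_USE_REAL_RX_BUF_SIZE = 1U << 9,
};

struct demo_hwdev {
	uint64_t features[4];	/* qword 0 carries the bits tested here */
};

/* Same shape of check as the HINIC3_IS_USE_REAL_RX_BUF_SIZE() macro. */
static int use_real_rx_buf_size(const struct demo_hwdev *hwdev)
{
	return (hwdev->features[0] & DEMO_F_USE_REAL_RX_BUF_SIZE) != 0;
}

int main(void)
{
	struct demo_hwdev hwdev = { .features = { DEMO_F_USE_REAL_RX_BUF_SIZE } };
	unsigned int rx_buf_sz = 2048;

	/* With the bit set the raw size is programmed directly; otherwise
	 * the driver looks the size up in its fixed hw buffer-size table. */
	if (use_real_rx_buf_size(&hwdev))
		printf("rx_buf_sz field = %u (raw size)\n", rx_buf_sz);
	else
		printf("rx_buf_sz field = table index of %u\n", rx_buf_sz);
	return 0;
}

The features are fetched in hinic3_init_comm_ch() (see the hinic3_hwdev.c hunk above), so paths such as get_hw_rx_buf_size() can test hwdev->features[0] later on.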
* [PATCH v8 3/7] net/hinic3: use different callback func to split new/old cmdq operations
2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang
2026-03-24 1:55 ` [PATCH v8 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang
2026-03-24 1:55 ` [PATCH v8 2/7] net/hinic3: add enhance cmdq " Feifei Wang
@ 2026-03-24 1:55 ` Feifei Wang
2026-03-24 1:55 ` [PATCH v8 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang
` (7 subsequent siblings)
10 siblings, 0 replies; 80+ messages in thread
From: Feifei Wang @ 2026-03-24 1:55 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang

From: Feifei Wang <wangfeifei40@huawei.com>

The new SPx series NIC with the enhanced cmdq sends control messages to the hardware tile in the NIC (htn). This is different from the previous SPx NIC, which sends control messages to the software tile in the NIC (stn).

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_nic_cfg.c | 50 ++----
drivers/net/hinic3/base/hinic3_nic_cfg.h | 79 +++++----
drivers/net/hinic3/hinic3_ethdev.c | 16 +-
drivers/net/hinic3/hinic3_nic_io.h | 115 +++++++++++++
drivers/net/hinic3/hinic3_rx.c | 3 +-
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 161 ++++++++++++++++++
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 62 +++++++
drivers/net/hinic3/htn_adapt/meson.build | 7 +
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 145 ++++++++++++++++
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 +++++
drivers/net/hinic3/stn_adapt/meson.build | 7 +
11 files changed, 618 insertions(+), 73 deletions(-)
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h
create mode 100644 drivers/net/hinic3/htn_adapt/meson.build
create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c
create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h
create mode 100644 drivers/net/hinic3/stn_adapt/meson.build

diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c
index ac44da46c2..22caac0457 100644
--- a/drivers/net/hinic3/base/hinic3_nic_cfg.c
+++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c
@@ -11,6 +11,7 @@ #include "hinic3_mbox.h" #include "hinic3_nic_cfg.h" #include "hinic3_wq.h" +#include "hinic3_nic_io.h" struct vf_msg_handler { uint16_t cmd;
@@ -442,6 +443,7 @@ int hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) { struct hinic3_vport_state en_state; + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint16_t out_size = sizeof(en_state); int err;
@@ -451,6 +453,7 @@ hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) memset(&en_state, 0, sizeof(en_state)); en_state.func_id = hinic3_global_func_id(hwdev); en_state.state = enable ?
1 : 0; + en_state.num_qps = nic_dev->num_rqs; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_VPORT_ENABLE, @@ -1159,13 +1162,12 @@ hinic3_rss_set_hash_key(struct hinic3_hwdev *hwdev, uint8_t *key, uint16_t key_s } int -hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, - uint32_t *indir_table, uint32_t indir_table_size) +hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table) { struct hinic3_cmd_buf *cmd_buf = NULL; - uint16_t *indir_tbl = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; int err; - uint32_t i; if (!hwdev || !indir_table) return -EINVAL; @@ -1177,31 +1179,28 @@ hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, } cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, - cmd_buf, cmd_buf, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_get_rss_indir_table(nic_dev, cmd_buf); + err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, cmd_buf, 0); if (err) { PMD_DRV_LOG(ERR, "Get rss indir table failed"); hinic3_free_cmd_buf(cmd_buf); return err; } - indir_tbl = (uint16_t *)cmd_buf->buf; - for (i = 0; i < indir_table_size; i++) - indir_table[i] = *(indir_tbl + i); + nic_dev->cmdq_ops->cmd_buf_to_rss_indir_table(cmd_buf, indir_table); hinic3_free_cmd_buf(cmd_buf); return 0; } int -hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size) +hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table) { - struct nic_rss_indirect_tbl *indir_tbl = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - uint32_t i, size; - uint32_t *temp = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; uint64_t out_param = 0; int err; @@ -1214,22 +1213,9 @@ hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table return -ENOMEM; } - cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; - memset(indir_tbl, 0, sizeof(*indir_tbl)); - - for (i = 0; i < indir_table_size; i++) - indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - size = sizeof(indir_tbl->entry) / sizeof(uint16_t); - temp = (uint32_t *)indir_tbl->entry; - for (i = 0; i < size; i++) - temp[i] = rte_cpu_to_be_32(temp[i]); - - err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, - cmd_buf, &out_param, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir_table, cmd_buf); + err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set rss indir table failed"); err = -EFAULT; @@ -1477,7 +1463,7 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EIO; } - *cos_id = vf_dcb.state.default_cos; + *cos_id = vf_dcb.state.default_cos % HINIC3_COS_NUM_MAX_HTN; return 0; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index a88d62333d..06d5bc7d1b 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -14,16 +14,17 @@ #define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1) #define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1) -#define HINIC3_DCB_UP_MAX 0x8 +#define HINIC3_DCB_UP_MAX 0x8 -#define 
HINIC3_MAX_NUM_RQ 256 +#define HINIC3_MAX_NUM_RQ 256 -#define HINIC3_MAX_MTU_SIZE 9600 -#define HINIC3_MIN_MTU_SIZE 256 +#define HINIC3_MAX_MTU_SIZE 9600 +#define HINIC3_MIN_MTU_SIZE 256 -#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX_HTN 4 -#define HINIC3_VLAN_TAG_SIZE 4 +#define HINIC3_VLAN_TAG_SIZE 4 #define HINIC3_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2) @@ -34,28 +35,41 @@ #define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen) -#define HINIC3_PF_SET_VF_ALREADY 0x4 -#define HINIC3_MGMT_STATUS_EXIST 0x6 -#define CHECK_IPSU_15BIT 0x8000 +#define HINIC3_PF_SET_VF_ALREADY 0x4 +#define HINIC3_MGMT_STATUS_EXIST 0x6 +#define CHECK_IPSU_15BIT 0x8000 -#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB -#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC +#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB +#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC -#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF +#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF -#define HINIC3_MAX_UC_MAC_ADDRS 128 -#define HINIC3_MAX_MC_MAC_ADDRS 2048 +#define HINIC3_MAX_UC_MAC_ADDRS 128 +#define HINIC3_MAX_MC_MAC_ADDRS 2048 -#define CAP_INFO_MAX_LEN 512 -#define VENDOR_MAX_LEN 17 +#define CAP_INFO_MAX_LEN 512 +#define VENDOR_MAX_LEN 17 /* Structures for RSS config. */ -#define HINIC3_RSS_INDIR_SIZE 256 -#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 -#define HINIC3_RSS_KEY_SIZE 40 -#define HINIC3_RSS_ENABLE 0x01 -#define HINIC3_RSS_DISABLE 0x00 -#define HINIC3_INVALID_QID_BASE 0xffff +#define HINIC3_RSS_INDIR_SIZE 256 +#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 +#define HINIC3_RSS_KEY_SIZE 40 +#define HINIC3_RSS_ENABLE 0x01 +#define HINIC3_RSS_DISABLE 0x00 +#define HINIC3_INVALID_QID_BASE 0xffff + +#define HINIC3_SUPPORT_FEATURE(dev, feature) \ + ((hinic3_get_driver_feature(dev) & NIC_F_##feature) != 0) +#define HINIC3_SUPPORT_RX_HW_COMPACT_CQE(dev) \ + HINIC3_SUPPORT_FEATURE(dev, RX_HW_COMPACT_CQE) +#define HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(dev) \ + HINIC3_SUPPORT_FEATURE(dev, TX_WQE_COMPACT_TASK) +#define HINIC3_SUPPORT_VXLAN_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, VXLAN_OFFLOAD) +#define HINIC3_SUPPORT_GENEVE_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, GENEVE_OFFLOAD) +#define HINIC3_SUPPORT_IPXIP_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, IPXIP_OFFLOAD) struct hinic3_rss_type { uint8_t tcp_ipv6_ext; @@ -312,7 +326,9 @@ struct hinic3_vport_state { uint16_t func_id; uint16_t rsvd1; uint8_t state; /**< 0:disable, 1:enable. */ - uint8_t rsvd2[3]; + uint8_t num_qps; + uint8_t rx_compact_wqe_en; + uint8_t rsvd2; }; #define MAG_CMD_PORT_DISABLE 0x0 @@ -670,12 +686,15 @@ enum hinic3_func_tbl_cfg_bitmap { FUNC_CFG_INIT, FUNC_CFG_RX_BUF_SIZE, FUNC_CFG_MTU, + FUNC_CFG_RX_COMPACT_WQE_EN, /**< Enable 8Byte WQE. */ }; struct hinic3_func_tbl_cfg { uint16_t rx_wqe_buf_size; uint16_t mtu; - uint32_t rsvd[9]; + uint8_t rx_compact_wqe_en; /**< Enable Rx 8Byte compact WQE. */ + uint8_t rsvd0[3]; + uint32_t rsvd1[8]; }; struct hinic3_cmd_set_func_tbl { @@ -895,7 +914,7 @@ struct hinic3_set_fdir_ethertype_rule { struct mgmt_msg_head head; uint16_t func_id; - uint16_t rsvd1; + uint16_t index; uint8_t pkt_type_en; uint8_t pkt_type; uint8_t qid; @@ -1231,14 +1250,11 @@ int hinic3_rss_template_free(struct hinic3_hwdev *hwdev); * Device pointer to hwdev. * @param[in] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. 
*/ -int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table); /** * Get RSS indirect table. @@ -1247,14 +1263,11 @@ int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_t * Device pointer to hwdev. * @param[out] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table); /** * Set RSS type. diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index da2d6722d2..e39a6ecf13 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -15,6 +15,8 @@ #include "base/hinic3_hw_comm.h" #include "base/hinic3_nic_cfg.h" #include "base/hinic3_nic_event.h" +#include "htn_adapt/hinic3_htn_cmdq.h" +#include "stn_adapt/hinic3_stn_cmdq.h" #include "hinic3_nic_io.h" #include "hinic3_tx.h" #include "hinic3_rx.h" @@ -2577,8 +2579,7 @@ hinic3_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) { PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err); return err; @@ -2626,8 +2627,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) return err; @@ -2648,8 +2648,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, } } - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl); if (err) PMD_DRV_LOG(ERR, "Set RSS reta table failed"); @@ -3387,6 +3386,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto get_cap_fail; } + if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); + else + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index db5802e4b7..e589deed23 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -6,6 +6,7 @@ #define _HINIC3_NIC_IO_H_ #include "hinic3_ethdev.h" +#include "base/hinic3_cmdq.h" #define HINIC3_SQ_WQEBB_SHIFT 4 #define HINIC3_RQ_WQEBB_SHIFT 3 @@ -25,6 +26,13 @@ #define HINIC3_CI_PADDR(base_paddr, q_id) \ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE) +#define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) + +#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -37,12 +45,119 @@ enum hinic3_queue_type { HINIC3_MAX_QUEUE_TYPE, }; +enum hinic3_qp_ctxt_type { + HINIC3_QP_CTXT_TYPE_SQ, + HINIC3_QP_CTXT_TYPE_RQ, +}; + +/* Prepare cmd to clean tso/lro space */ +typedef 
uint8_t (*prepare_cmd_buf_clean_tso_lro_space_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type); +/* Prepare cmd to store RQ and TQ ctxt */ +typedef uint8_t (*prepare_cmd_buf_qp_context_multi_store_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, + uint16_t max_ctxts); +/* Prepare cmd to modify vlan tag */ +typedef uint8_t (*prepare_cmd_buf_modify_svlan_t)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, + uint8_t vlan_mode); +/* Prepare cmd to set RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_set_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf); +/* Prepare cmd to get RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_get_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf); +/* Configure RSS indir table */ +typedef void (*cmd_buf_to_rss_indir_table_t)(const struct hinic3_cmd_buf *cmd_buf, + uint32_t *indir_table); + +struct hinic3_nic_cmdq_ops { + prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space; + prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store; + prepare_cmd_buf_modify_svlan_t prepare_cmd_buf_modify_svlan; + prepare_cmd_buf_set_rss_indir_table_t prepare_cmd_buf_set_rss_indir_table; + prepare_cmd_buf_get_rss_indir_table_t prepare_cmd_buf_get_rss_indir_table; + cmd_buf_to_rss_indir_table_t cmd_buf_to_rss_indir_table; +}; + /* Doorbell info. */ struct hinic3_db { uint32_t db_info; uint32_t pi_hi; }; +struct hinic3_sq_ctxt { + uint32_t ci_pi; + uint32_t drop_mode_sp; + uint32_t wq_pfn_hi_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd0; + uint32_t pkt_drop_thd; + uint32_t global_sq_id; + uint32_t vlan_ceq_attr; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t rsvd8; + uint32_t rsvd9; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_ctxt { + uint32_t ci_pi; + uint32_t ceq_attr; + uint32_t wq_pfn_hi_type_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd[3]; + uint32_t cqe_sge_len; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t pi_paddr_hi; + uint32_t pi_paddr_lo; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_cqe_ctx { + struct mgmt_msg_head msg_head; + + uint8_t cqe_type; + uint8_t rq_id; + uint8_t threshold_cqe_num; + uint8_t rsvd1; + + uint16_t msix_entry_idx; + uint16_t rsvd2; + + uint32_t ci_addr_hi; + uint32_t ci_addr_lo; + + uint16_t timer_loop; + uint16_t rsvd3; +}; + +struct hinic3_rq_enable { + struct mgmt_msg_head msg_head; + + uint32_t rq_id; + uint8_t rq_enable; + uint8_t rsvd[3]; +}; + #define DB_INFO_QID_SHIFT 0 #define DB_INFO_NON_FILTER_SHIFT 22 #define DB_INFO_CFLAG_SHIFT 23 diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e8e417b474..3d5f4e4524 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -407,8 +407,7 @@ hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) /* Build indir tbl according to the number of rss queue. 
*/ hinic3_fill_indir_tbl(nic_dev, indir_tbl); - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c new file mode 100644 index 0000000000..d997647f48 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_htn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + ctxt_block->cmdq_hdr.dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id, uint16_t func_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->dest_func_id = func_id; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t func_id; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + func_id = hinic3_global_func_id(nic_dev->hwdev); + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid, func_id); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], + start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->dest_func_id = func_id; + vlan_ctx->start_qid = q_id; + vlan_ctx->vlan_tag = vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_HTN_CMD_SVLAN_MODIFY; +} + +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct 
hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf + sizeof(struct hinic3_rss_cmd_header); + cmd_buf->size = sizeof(struct hinic3_rss_cmd_header) + HINIC3_RSS_INDIR_SIZE; + memset(indir_tbl, 0, HINIC3_RSS_INDIR_SIZE); + + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl[i] = (uint8_t)(*(indir_table + i)); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(indir_tbl, HINIC3_RSS_INDIR_SIZE); + + return HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + memset(cmd_buf->buf, 0, cmd_buf->size); + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + return HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_be32_to_cpu(cmd_buf->buf, HINIC3_RSS_INDIR_SIZE); + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h new file mode 100644 index 0000000000..89709efdd0 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_HTN_CMDQ_H_ +#define _HINIC3_HTN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint32_t rsvd[2]; + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_rss_cmd_header { + uint32_t rsv[3]; + uint16_t rsv1; + uint16_t dest_func_id; +}; + +/* NIC HTN CMD */ +enum hinic3_htn_cmd { + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST = 0x20, + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_LD, + HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN, + HINIC3_HTN_CMD_SVLAN_MODIFY, + HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE, + HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE +}; + +struct hinic3_vlan_ctx { + uint32_t rsv[2]; + uint16_t vlan_tag; + uint8_t vlan_sel; + 
uint8_t vlan_mode; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @retval Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); + +#endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/htn_adapt/meson.build b/drivers/net/hinic3/htn_adapt/meson.build new file mode 100644 index 0000000000..17f7ad09e3 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_htn_cmdq.c', +) diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c new file mode 100644 index 0000000000..3d4becf07c --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_stn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->rsvd = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->func_id = func_id; + vlan_ctx->qid = q_id; + vlan_ctx->vlan_id = vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + 
hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i, size; + uint32_t *temp = NULL; + struct nic_rss_indirect_tbl *indir_tbl = NULL; + + indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; + cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); + memset(indir_tbl, 0, sizeof(*indir_tbl)); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); + size = sizeof(indir_tbl->entry) / 4; + temp = (uint32_t *)indir_tbl->entry; + for (i = 0; i < size; i++) { + rte_atomic_thread_fence(rte_memory_order_seq_cst); + temp[i] = rte_cpu_to_be_32(temp[i]); + } + return HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + (void)nic_dev; + memset(cmd_buf->buf, 0, cmd_buf->size); + + return HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint16_t *indir_tbl = NULL; + + indir_tbl = (uint16_t *)cmd_buf->buf; + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h new file mode 100644 index 0000000000..8235dcd0fa --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_STN_CMDQ_H_ +#define _HINIC3_STN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t rsvd; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; + uint32_t rsvd; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_vlan_ctx { + uint32_t func_id; + uint32_t qid; /* if qid = 0xFFFF, config for all queues */ + uint32_t vlan_id; + uint32_t vlan_mode; + uint32_t vlan_sel; +}; + +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. 
+ */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); + +#endif /* _HINIC3_STN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/stn_adapt/meson.build b/drivers/net/hinic3/stn_adapt/meson.build new file mode 100644 index 0000000000..99f7f66ab4 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_stn_cmdq.c', +) -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
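To make the split easier to follow: at init time the driver selects one table of command-buffer builders (hinic3_nic_cmdq_get_stn_ops() when NIC_F_HTN_CMDQ is not negotiated, hinic3_nic_cmdq_get_htn_ops() otherwise) and every caller then goes through nic_dev->cmdq_ops, so no per-command NIC-generation checks remain. The condensed sketch below shows only that dispatch pattern; the types, opcode values and the single callback are simplified stand-ins for the real hinic3_nic_cmdq_ops table.

#include <stdint.h>
#include <stdio.h>

/* One callback is enough to show the dispatch; the real ops table
 * carries several prepare_cmd_buf_* builders. */
struct demo_cmdq_ops {
	uint8_t (*prepare_get_rss_indir_table)(void *cmd_buf);
};

static uint8_t stn_prepare_get_rss_indir_table(void *cmd_buf)
{
	(void)cmd_buf;	/* a software-tile builder would fill cmd_buf here */
	return 0xA1;	/* stand-in opcode for the stn command set */
}

static uint8_t htn_prepare_get_rss_indir_table(void *cmd_buf)
{
	(void)cmd_buf;	/* a hardware-tile builder would fill cmd_buf here */
	return 0xB1;	/* stand-in opcode for the htn command set */
}

static const struct demo_cmdq_ops stn_ops = {
	.prepare_get_rss_indir_table = stn_prepare_get_rss_indir_table,
};

static const struct demo_cmdq_ops htn_ops = {
	.prepare_get_rss_indir_table = htn_prepare_get_rss_indir_table,
};

#define DEMO_F_HTN_CMDQ	(1U << 21)	/* same bit position as NIC_F_HTN_CMDQ */

int main(void)
{
	uint32_t feature_cap = DEMO_F_HTN_CMDQ;	/* as if capability nego set it */
	const struct demo_cmdq_ops *ops =
		(feature_cap & DEMO_F_HTN_CMDQ) ? &htn_ops : &stn_ops;

	/* Callers never test the NIC generation again, they just call ops. */
	printf("opcode = 0x%x\n",
	       (unsigned int)ops->prepare_get_rss_indir_table(NULL));
	return 0;
}

The same function-pointer-table idiom is reused for the rx/tx ops introduced in the next patch, so supporting a further NIC generation mostly means providing one more table rather than adding branches to every caller.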
* [PATCH v8 4/7] net/hinic3: add fun init ops to support Compact CQE
2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang
` (2 preceding siblings ...)
2026-03-24 1:55 ` [PATCH v8 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang
@ 2026-03-24 1:55 ` Feifei Wang
2026-03-24 1:55 ` [PATCH v8 5/7] net/hinic3: add rx " Feifei Wang
` (6 subsequent siblings)
10 siblings, 0 replies; 80+ messages in thread
From: Feifei Wang @ 2026-03-24 1:55 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang

From: Feifei Wang <wangfeifei40@huawei.com>

For the new SPx NIC, use compact CQE to achieve better performance. In this mode, the CQE is uploaded together with the packet. During function init, the CQE's DMA memory mapping is replaced with a CI index, and the hinic3 driver polls the CI to check whether a packet has arrived.

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/hinic3_ethdev.c | 208 +++++--
drivers/net/hinic3/hinic3_ethdev.h | 117 ++--
drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++++----------
drivers/net/hinic3/hinic3_nic_io.h | 58 +-
drivers/net/hinic3/hinic3_rx.h | 18 +
drivers/net/hinic3/hinic3_tx.h | 8 +
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 24 +-
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 12 +-
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 24 +-
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 12 +-
10 files changed, 573 insertions(+), 433 deletions(-)

diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index e39a6ecf13..1010773ac1 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -32,7 +32,7 @@ #define HINIC3_DEFAULT_RX_FREE_THRESH 32u #define HINIC3_DEFAULT_TX_FREE_THRESH 32u -#define HINIC3_RX_WAIT_CYCLE_THRESH 500 +#define HINIC3_RX_WAIT_CYCLE_THRESH 150 /** * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID. */
@@ -431,8 +431,10 @@ hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) { + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint8_t default_cos = 0; uint8_t valid_cos_bitmap; + uint8_t cos_num_max; uint8_t i; valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap;
@@ -441,7 +443,10 @@ hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EFAULT; } - for (i = 0; i < HINIC3_COS_NUM_MAX; i++) { + cos_num_max = nic_dev->feature_cap & NIC_F_HTN_CMDQ ? + HINIC3_COS_NUM_MAX_HTN : HINIC3_COS_NUM_MAX; + + for (i = 0; i < cos_num_max; i++) { if (valid_cos_bitmap & RTE_BIT32(i)) /* Find max cos id as default cos. */ default_cos = i;
@@ -644,6 +649,15 @@ hinic3_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +hinic3_dev_tnl_tso_support(struct rte_eth_dev_info *info, struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + if (HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO; +} + /** * Get information about the device.
* @@ -684,6 +698,8 @@ hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) + hinic3_dev_tnl_tso_support(info, nic_dev); info->hash_key_size = HINIC3_RSS_KEY_SIZE; info->reta_size = HINIC3_RSS_INDIR_SIZE; @@ -926,16 +942,25 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, struct hinic3_rxq *rxq = NULL; const struct rte_memzone *rq_mz = NULL; const struct rte_memzone *cqe_mz = NULL; + const struct rte_memzone *ci_mz = NULL; const struct rte_memzone *pi_mz = NULL; uint16_t rq_depth, rx_free_thresh; uint32_t queue_buf_size; void *db_addr = NULL; int wqe_count; uint32_t buf_size; + uint32_t rx_buf_size; int err; nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + /* Queue depth must be equal to queue 0 */ + if (qid != 0 && nb_desc != nic_dev->rxqs[0]->q_depth) { + PMD_DRV_LOG(WARNING, "rxq%u depth:%u is not equal to queue0 depth:%u.", + qid, nb_desc, nic_dev->rxqs[0]->q_depth); + nb_desc = nic_dev->rxqs[0]->q_depth; + } + /* Queue depth must be power of 2, otherwise will be aligned up. */ rq_depth = (nb_desc & (nb_desc - 1)) ? ((uint16_t)(1U << (rte_log2_u32(nb_desc) + 1))) : nb_desc; @@ -997,8 +1022,7 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH; rxq->rx_deferred_start = rx_conf->rx_deferred_start; /* If buf_len used for function table, need to translated. */ - uint16_t rx_buf_size = - rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rx_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size); if (err) { PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s", @@ -1006,11 +1030,16 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto adjust_bufsize_fail; } - if (buf_size >= HINIC3_RX_BUF_SIZE_4K && - buf_size < HINIC3_RX_BUF_SIZE_16K) - rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; - else - rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + /* If NIC support compact CQE, use compact wqe as default. 
*/ + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + rxq->wqe_type = HINIC3_COMPACT_RQ_WQE; + } else { + if (buf_size >= HINIC3_RX_BUF_SIZE_4K && + buf_size < HINIC3_RX_BUF_SIZE_16K) + rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; + else + rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + } rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type; rxq->wqebb_size = (uint16_t)RTE_BIT32(rxq->wqebb_shift); @@ -1062,36 +1091,52 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_rx_info_fail; } - cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, - rq_depth * sizeof(*rxq->rx_cqe), - RTE_CACHE_LINE_SIZE, socket_id); - if (!cqe_mz) { - PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", - dev->data->name); - err = -ENOMEM; - goto alloc_cqe_mz_fail; - } - memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); - rxq->cqe_mz = cqe_mz; - rxq->cqe_start_paddr = cqe_mz->iova; - rxq->cqe_start_vaddr = cqe_mz->addr; - rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; - - wqe_count = hinic3_rx_fill_wqe(rxq); - if (wqe_count != rq_depth) { - PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", - wqe_count, dev->data->name); - err = -ENOMEM; - goto fill_rx_wqe_fail; + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_ci_mz", qid, + sizeof(*rxq->rq_ci), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (!ci_mz) { + PMD_DRV_LOG(ERR, "Allocate ci mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + + memset(ci_mz->addr, 0, sizeof(*rxq->rq_ci)); + rxq->ci_mz = ci_mz; + rxq->rq_ci = (struct hinic3_rq_ci_wb *)ci_mz->addr; + rxq->rq_ci_paddr = ci_mz->iova; + } else { + cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, + rq_depth * sizeof(*rxq->rx_cqe), + RTE_CACHE_LINE_SIZE, socket_id); + if (!cqe_mz) { + PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); + rxq->cqe_mz = cqe_mz; + rxq->cqe_start_paddr = cqe_mz->iova; + rxq->cqe_start_vaddr = cqe_mz->addr; + rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; + + wqe_count = hinic3_rx_fill_wqe(rxq); + if (wqe_count != rq_depth) { + PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", + wqe_count, dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(cqe_mz); + goto alloc_cqe_ci_mz_fail; + } } - /* Record rxq pointer in rte_eth rx_queues. */ dev->data->rx_queues[qid] = rxq; return 0; -fill_rx_wqe_fail: - hinic3_memzone_free(rxq->cqe_mz); -alloc_cqe_mz_fail: +alloc_cqe_ci_mz_fail: rte_free(rxq->rx_info); alloc_rx_info_fail: @@ -1199,6 +1244,7 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, txq->owner = 1; txq->cos = nic_dev->default_cos; txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->tx_wqe_compact_task = HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev); ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid, HINIC3_CI_Q_ADDR_SIZE, @@ -1246,7 +1292,6 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_tx_info_fail; } - /* Record txq pointer in rte_eth tx_queues. 
*/ dev->data->tx_queues[qid] = txq; return 0; @@ -1274,7 +1319,10 @@ hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) hinic3_free_rxq_mbufs(rxq); - hinic3_memzone_free(rxq->cqe_mz); + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) + hinic3_memzone_free(rxq->ci_mz); + else + hinic3_memzone_free(rxq->cqe_mz); rte_free(rxq->rx_info); rxq->rx_info = NULL; @@ -1323,9 +1371,12 @@ hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) static int hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; + rxq = dev->data->rx_queues[rq_id]; + rc = hinic3_start_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1333,14 +1384,17 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } - dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", - rq_id); - return rc; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", rq_id); + return rc; + } } + + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1358,14 +1412,10 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) static int hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); - return rc; - } rc = hinic3_stop_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1373,6 +1423,15 @@ hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); + return rc; + } + } + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; @@ -1388,6 +1447,7 @@ hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id) HINIC3_SET_TXQ_STARTED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1404,6 +1464,7 @@ hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id) dev->data->name, sq_id); return rc; } + HINIC3_SET_TXQ_STOPPED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -3286,6 +3347,24 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { .flow_ops_get = hinic3_dev_filter_ctrl, }; +static void hinic3_nic_tx_rx_ops_init(struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev)) + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_compact_task_offload; + else + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_normal_task_offload; + + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + nic_dev->rx_ops->nic_rx_get_cqe_info = hinic3_rx_get_compact_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_integrated_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty; + } else { + nic_dev->rx_ops->nic_rx_get_cqe_info = 
hinic3_rx_get_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_separate_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_rq_empty; + } +} + /** * Initialize the network function, including hardware configuration, memory * allocation for data structures, MAC address setup, and interrupt enabling. @@ -3333,6 +3412,27 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto alloc_eth_addr_fail; } + nic_dev->cmdq_ops = rte_zmalloc("cmdq_ops", sizeof(struct hinic3_nic_cmdq_ops), 0); + if (!nic_dev->cmdq_ops) { + PMD_DRV_LOG(ERR, "Allocate cmdq_ops memory failed"); + err = -ENOMEM; + goto alloc_cmdq_ops_fail; + } + + nic_dev->rx_ops = rte_zmalloc("rx_ops", sizeof(struct hinic3_nic_rx_ops), 0); + if (!nic_dev->rx_ops) { + PMD_DRV_LOG(ERR, "Allocate rx_ops memory failed"); + err = -ENOMEM; + goto alloc_rx_ops_fail; + } + + nic_dev->tx_ops = rte_zmalloc("tx_ops", sizeof(struct hinic3_nic_tx_ops), 0); + if (!nic_dev->tx_ops) { + PMD_DRV_LOG(ERR, "Allocate tx_ops memory failed"); + err = -ENOMEM; + goto alloc_tx_ops_fail; + } + nic_dev->mc_list = rte_zmalloc("hinic3_mc", HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0); if (!nic_dev->mc_list) { @@ -3391,6 +3491,8 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) else nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + hinic3_nic_tx_rx_ops_init(nic_dev); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", @@ -3479,6 +3581,18 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) nic_dev->mc_list = NULL; alloc_mc_list_fail: + rte_free(nic_dev->tx_ops); + nic_dev->tx_ops = NULL; + +alloc_tx_ops_fail: + rte_free(nic_dev->rx_ops); + nic_dev->rx_ops = NULL; + +alloc_rx_ops_fail: + rte_free(nic_dev->cmdq_ops); + nic_dev->cmdq_ops = NULL; + +alloc_cmdq_ops_fail: rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 4a5dbb0844..3898edd076 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -14,44 +14,50 @@ #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle) -#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN -#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD -#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD -#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG -#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM -#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM -#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM -#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN -#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK -#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM -#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 -#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 -#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN -#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED -#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH -#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK -#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN -#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM -#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 -#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO -#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM +#define HINIC3_PKT_RX_L4_CKSUM_BAD 
RTE_MBUF_F_RX_L4_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN +#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD +#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD +#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG +#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM +#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM +#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM +#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN +#define HINIC3_PKT_TX_QINQ_PKT RTE_MBUF_F_TX_QINQ +#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK +#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM +#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 +#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 +#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN +#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED +#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH +#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK +#define HINIC3_PKT_TX_TUNNEL_GRE RTE_MBUF_F_TX_TUNNEL_GRE +#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN +#define HINIC3_PKT_TX_TUNNEL_VXLAN_GPE RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE +#define HINIC3_PKT_TX_TUNNEL_GENEVE RTE_MBUF_F_TX_TUNNEL_GENEVE +#define HINIC3_PKT_TX_TUNNEL_IPIP RTE_MBUF_F_TX_TUNNEL_IPIP +#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM +#define HINIC3_PKT_TX_OUTER_UDP_CKSUM RTE_MBUF_F_TX_OUTER_UDP_CKSUM +#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 +#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO +#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM #define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool" /* Mbuf pool for copy invalid mbuf segs. */ -#define HINIC3_COPY_MEMPOOL_DEPTH 1024 -#define HINIC3_COPY_MEMPOOL_CACHE 128 -#define HINIC3_COPY_MBUF_SIZE 4096 +#define HINIC3_COPY_MEMPOOL_DEPTH 1024 +#define HINIC3_COPY_MEMPOOL_CACHE 128 +#define HINIC3_COPY_MBUF_SIZE 4096 -#define HINIC3_DEV_NAME_LEN 32 -#define DEV_STOP_DELAY_MS 100 -#define DEV_START_DELAY_MS 100 -#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 +#define HINIC3_DEV_NAME_LEN 32 +#define DEV_STOP_DELAY_MS 100 +#define DEV_START_DELAY_MS 100 +#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 -#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) -#define HINIC3_MAX_QUEUE_NUM 64 +#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) +#define HINIC3_MAX_QUEUE_NUM 256 #define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \ ((struct hinic3_nic_dev *)(dev)->data->dev_private) @@ -68,23 +74,34 @@ enum hinic3_tx_cvlan_type { }; enum nic_feature_cap { - NIC_F_CSUM = RTE_BIT32(0), - NIC_F_SCTP_CRC = RTE_BIT32(1), - NIC_F_TSO = RTE_BIT32(2), - NIC_F_LRO = RTE_BIT32(3), - NIC_F_UFO = RTE_BIT32(4), - NIC_F_RSS = RTE_BIT32(5), - NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), - NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), - NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), - NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), - NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), - NIC_F_FDIR = RTE_BIT32(11), - NIC_F_PROMISC = RTE_BIT32(12), - NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_CSUM = RTE_BIT32(0), + NIC_F_SCTP_CRC = RTE_BIT32(1), + NIC_F_TSO = RTE_BIT32(2), + NIC_F_LRO = RTE_BIT32(3), + NIC_F_UFO = RTE_BIT32(4), + NIC_F_RSS = RTE_BIT32(5), + NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), + NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), + NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), + NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), + 
NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), + NIC_F_FDIR = RTE_BIT32(11), + NIC_F_PROMISC = RTE_BIT32(12), + NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_PTP_1588_V2 = RTE_BIT32(18), + NIC_F_TX_WQE_COMPACT_TASK = RTE_BIT32(19), + NIC_F_RX_HW_COMPACT_CQE = RTE_BIT32(20), + NIC_F_HTN_CMDQ = RTE_BIT32(21), + NIC_F_GENEVE_OFFLOAD = RTE_BIT32(22), + NIC_F_IPXIP_OFFLOAD = RTE_BIT32(23), + NIC_F_TC_FLOWER_OFFLOAD = RTE_BIT32(24), + NIC_F_HTN_FDIR = RTE_BIT32(25), + NIC_F_SQ_RQ_CI_COALESCE = RTE_BIT32(26), + NIC_F_RX_SW_COMPACT_CQE = RTE_BIT32(27), + }; -#define DEFAULT_DRV_FEATURE 0x3FFF +#define DEFAULT_DRV_FEATURE 0x3FC3FFF TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow); TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow); @@ -133,6 +150,10 @@ struct hinic3_nic_dev { struct hinic3_tcam_info tcam; struct hinic3_ethertype_filter_list filter_ethertype_list; struct hinic3_fdir_rule_filter_list filter_fdir_rule_list; + struct hinic3_nic_cmdq_ops *cmdq_ops; + struct hinic3_nic_rx_ops *rx_ops; + struct hinic3_nic_tx_ops *tx_ops; + }; extern const struct rte_flow_ops hinic3_flow_ops; diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c index 7f2972f1d1..9203dcce40 100644 --- a/drivers/net/hinic3/hinic3_nic_io.c +++ b/drivers/net/hinic3/hinic3_nic_io.c @@ -11,310 +11,194 @@ #include "hinic3_rx.h" #include "hinic3_tx.h" -#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 -#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 -#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF -#define HINIC3_DEAULT_DROP_THD_OFF 0 - -#define WQ_PREFETCH_MAX 6 -#define WQ_PREFETCH_MIN 1 -#define WQ_PREFETCH_THRESHOLD 256 - -#define HINIC3_Q_CTXT_MAX \ - ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) - -enum hinic3_qp_ctxt_type { - HINIC3_QP_CTXT_TYPE_SQ, - HINIC3_QP_CTXT_TYPE_RQ, -}; - -struct hinic3_qp_ctxt_header { - uint16_t num_queues; - uint16_t queue_type; - uint16_t start_qid; - uint16_t rsvd; -}; - -struct hinic3_sq_ctxt { - uint32_t ci_pi; - uint32_t drop_mode_sp; /**< Packet drop mode and special flags. */ - uint32_t wq_pfn_hi_owner; /**< High PFN and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd0; /**< Reserved field 0. */ - uint32_t pkt_drop_thd; /**< Packet drop threshold. */ - uint32_t global_sq_id; - uint32_t vlan_ceq_attr; /**< VLAN and CEQ attributes. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t rsvd8; /**< Reserved field 8. */ - uint32_t rsvd9; /**< Reserved field 9. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_rq_ctxt { - uint32_t ci_pi; - uint32_t ceq_attr; /**< Completion event queue attributes. */ - uint32_t wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd[3]; /**< Reserved field. */ - uint32_t cqe_sge_len; /**< CQE scatter/gather element length. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. 
*/ - - uint32_t pi_paddr_hi; /**< High 32-bits of PI DMA address. */ - uint32_t pi_paddr_lo; /**< Low 32-bits of PI DMA address. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_sq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_rq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; - uint32_t rsvd; -}; - -#define SQ_CTXT_SIZE(num_sqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_sqs) * sizeof(struct hinic3_sq_ctxt))) - -#define RQ_CTXT_SIZE(num_rqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - -#define CI_IDX_HIGH_SHIFH 12 +#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 +#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 +#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF +#define HINIC3_DEAULT_DROP_THD_OFF 0 + +#define WQ_PREFETCH_MAX 6 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +#define CI_IDX_HIGH_SHIFH 12 #define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) -#define SQ_CTXT_PI_IDX_SHIFT 0 -#define SQ_CTXT_CI_IDX_SHIFT 16 +#define SQ_CTXT_PI_IDX_SHIFT 0 +#define SQ_CTXT_CI_IDX_SHIFT 16 -#define SQ_CTXT_PI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_IDX_MASK 0xFFFFU +#define SQ_CTXT_PI_IDX_MASK 0xFFFFU +#define SQ_CTXT_CI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_PI_SET(val, member) \ +#define SQ_CTXT_CI_PI_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 -#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 +#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 +#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 -#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U -#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U +#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U +#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U -#define SQ_CTXT_MODE_SET(val, member) \ - (((val) & SQ_CTXT_MODE_##member##_MASK) \ +#define SQ_CTXT_MODE_SET(val, member) \ + (((val) & SQ_CTXT_MODE_##member##_MASK) \ << SQ_CTXT_MODE_##member##_SHIFT) -#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 +#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 -#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define SQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define SQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ << SQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 -#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 +#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 +#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 -#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ - (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ +#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ + (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ << SQ_CTXT_PKT_DROP_##member##_SHIFT) -#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 +#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 -#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU +#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU #define 
SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_VLAN_TAG_SHIFT 0 -#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 -#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 -#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 +#define SQ_CTXT_VLAN_TAG_SHIFT 0 +#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 +#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 +#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 -#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU -#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U -#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U -#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U +#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU +#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U +#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U +#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U -#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ - (((val) & SQ_CTXT_VLAN_##member##_MASK) \ +#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ + (((val) & SQ_CTXT_VLAN_##member##_MASK) \ << SQ_CTXT_VLAN_##member##_SHIFT) -#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define SQ_CTXT_PREF_CI_HI_SHIFT 0 -#define SQ_CTXT_PREF_OWNER_SHIFT 4 +#define SQ_CTXT_PREF_CI_HI_SHIFT 0 +#define SQ_CTXT_PREF_OWNER_SHIFT 4 -#define SQ_CTXT_PREF_CI_HI_MASK 0xFU -#define SQ_CTXT_PREF_OWNER_MASK 0x1U +#define SQ_CTXT_PREF_CI_HI_MASK 0xFU +#define SQ_CTXT_PREF_OWNER_MASK 0x1U -#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define SQ_CTXT_PREF_SET(val, member) \ - (((val) & SQ_CTXT_PREF_##member##_MASK) \ +#define SQ_CTXT_PREF_SET(val, member) \ + (((val) & SQ_CTXT_PREF_##member##_MASK) \ << SQ_CTXT_PREF_##member##_SHIFT) -#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT) -#define RQ_CTXT_PI_IDX_SHIFT 0 -#define RQ_CTXT_CI_IDX_SHIFT 16 +#define RQ_CTXT_PI_IDX_SHIFT 0 +#define RQ_CTXT_CI_IDX_SHIFT 16 -#define RQ_CTXT_PI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_IDX_MASK 0xFFFFU +#define RQ_CTXT_PI_IDX_MASK 0xFFFFU +#define RQ_CTXT_CI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_PI_SET(val, member) \ +#define RQ_CTXT_CI_PI_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 -#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 +#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 +#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 -#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_INTR_MASK 
0x3FFU +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ - (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ +#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ + (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT) -#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 -#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 +#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 +#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 -#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U -#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U +#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define RQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define RQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ << RQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define RQ_CTXT_CQE_LEN_SHIFT 28 +#define RQ_CTXT_CQE_LEN_SHIFT 28 -#define RQ_CTXT_CQE_LEN_MASK 0x3U +#define RQ_CTXT_CQE_LEN_MASK 0x3U -#define RQ_CTXT_CQE_LEN_SET(val, member) \ +#define RQ_CTXT_CQE_LEN_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define RQ_CTXT_PREF_CI_HI_SHIFT 0 -#define RQ_CTXT_PREF_OWNER_SHIFT 4 +#define RQ_CTXT_PREF_CI_HI_SHIFT 0 +#define RQ_CTXT_PREF_OWNER_SHIFT 4 -#define RQ_CTXT_PREF_CI_HI_MASK 0xFU -#define RQ_CTXT_PREF_OWNER_MASK 0x1U +#define RQ_CTXT_PREF_CI_HI_MASK 0xFU +#define RQ_CTXT_PREF_OWNER_MASK 0x1U -#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define RQ_CTXT_PREF_SET(val, member) \ - (((val) & RQ_CTXT_PREF_##member##_MASK) \ +#define RQ_CTXT_PREF_SET(val, member) \ + (((val) & RQ_CTXT_PREF_##member##_MASK) \ << RQ_CTXT_PREF_##member##_SHIFT) -#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ << RQ_CTXT_WQ_BLOCK_##member##_SHIFT) #define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4) -#define WQ_PAGE_PFN_SHIFT 12 -#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_PAGE_PFN_SHIFT 12 +#define WQ_BLOCK_PFN_SHIFT 9 #define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT) #define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -/** - * Prepare the command queue header and converted it to big-endian format. 
- * - * @param[out] qp_ctxt_hdr - * Pointer to command queue context header structure to be initialized. - * @param[in] ctxt_type - * Type of context (SQ/RQ) to be set in header. - * @param[in] num_queues - * Number of queues. - * @param[in] q_id - * Starting queue ID for this context. - */ -static void -hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, - enum hinic3_qp_ctxt_type ctxt_type, - uint16_t num_queues, uint16_t q_id) -{ - qp_ctxt_hdr->queue_type = ctxt_type; - qp_ctxt_hdr->num_queues = num_queues; - qp_ctxt_hdr->start_qid = q_id; - qp_ctxt_hdr->rsvd = 0; - - rte_atomic_thread_fence(rte_memory_order_seq_cst); +#define CQE_CTX_CI_ADDR_SHIFT 4 - hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); -} - -/** - * Initialize context structure for specified TXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] sq - * Pointer to TXQ structure. - * @param[in] sq_id - * ID of TXQ being configured. - * @param[out] sq_ctxt - * Pointer to structure that will hold TXQ context. - */ -static void +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, struct hinic3_sq_ctxt *sq_ctxt) { @@ -386,22 +270,13 @@ hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt)); } -/** - * Initialize context structure for specified RXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] rq - * Pointer to RXQ structure. - * @param[out] rq_ctxt - * Pointer to structure that will hold RXQ context. - */ -static void +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) { uint64_t wq_page_addr, wq_page_pfn, wq_block_pfn; uint32_t wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo; uint16_t pi_start, ci_start; - uint16_t wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT; + uint16_t wqe_type = rq->wqe_type; uint8_t intr_disable; /* RQ depth is in unit of 8 Bytes. */ @@ -446,6 +321,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE); rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN); break; + case HINIC3_COMPACT_RQ_WQE: + /* Use 8Byte WQE without SGE for CQE. */ + rq_ctxt->wq_pfn_hi_type_owner |= RQ_CTXT_WQ_PAGE_SET(3, WQE_TYPE); + break; default: PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type); } @@ -495,12 +374,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) static int init_sq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL; - struct hinic3_sq_ctxt *sq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_txq *sq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -511,28 +388,14 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_sqs) { - sq_ctxt_block = cmd_buf->buf; - sq_ctxt = sq_ctxt_block->sq_ctxt; - max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX ? 
HINIC3_Q_CTXT_MAX : (nic_dev->num_sqs - q_id); - - hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_SQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - sq = nic_dev->txqs[curr_id]; - hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]); - } - - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set SQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -563,12 +426,10 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) static int init_rq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL; - struct hinic3_rq_ctxt *rq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_rxq *rq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -579,28 +440,14 @@ init_rq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_rqs) { - rq_ctxt_block = cmd_buf->buf; - rq_ctxt = rq_ctxt_block->rq_ctxt; - max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX ? HINIC3_Q_CTXT_MAX : (nic_dev->num_rqs - q_id); - - hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_RQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - rq = nic_dev->rxqs[curr_id]; - hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]); - } - - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_RQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set RQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -633,9 +480,9 @@ static int clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; struct hinic3_cmd_buf *cmd_buf; uint64_t out_param = 0; + uint8_t cmd; int err; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -644,26 +491,11 @@ clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, return -ENOMEM; } - /* Construct related command request. */ - ctxt_block = cmd_buf->buf; - /* Assumed max_rqs must be equal to max_sqs. */ - ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; - ctxt_block->cmdq_hdr.queue_type = ctxt_type; - ctxt_block->cmdq_hdr.start_qid = 0; - /* - * Add a memory barrier to ensure that instructions are not out of order - * due to compilation optimization. - */ - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); - - cmd_buf->size = sizeof(*ctxt_block); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_clean_tso_lro_space(nic_dev, cmd_buf, ctxt_type); /* Send a command to hardware to clean up queue offload context. 
*/ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if ((err) || (out_param)) { PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, err: %d, out_param: %" PRIu64, @@ -705,6 +537,62 @@ hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev) nic_dev->rx_buff_len = buf_size; } +#define HINIC3_RX_CQE_TIMER_LOOP 15 +#define HINIC3_RX_CQE_COALESCE_NUM 63 + +int +hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rxq *rxq = NULL; + struct hinic3_rq_cqe_ctx cqe_ctx = { 0 }; + rte_iova_t rq_ci_paddr; + uint16_t out_size = sizeof(cqe_ctx); + uint16_t q_id = 0; + uint16_t cmd; + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ) + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN; + else + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX; + + while (q_id < nic_dev->num_rqs) { + rxq = nic_dev->rxqs[q_id]; + if (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE) { + rq_ci_paddr = rxq->rq_ci_paddr >> CQE_CTX_CI_ADDR_SHIFT; + cqe_ctx.ci_addr_hi = upper_32_bits(rq_ci_paddr); + cqe_ctx.ci_addr_lo = lower_32_bits(rq_ci_paddr); + cqe_ctx.threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM; + cqe_ctx.timer_loop = HINIC3_RX_CQE_TIMER_LOOP; + } else { + cqe_ctx.threshold_cqe_num = 0; + cqe_ctx.timer_loop = 0; + } + + cqe_ctx.cqe_type = (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE); + cqe_ctx.msix_entry_idx = rxq->msix_entry_idx; + cqe_ctx.rq_id = q_id; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, + &cqe_ctx, sizeof(cqe_ctx), + &cqe_ctx, &out_size); + if (err || !out_size || cqe_ctx.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed, qid: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, err, cqe_ctx.msg_head.status, out_size); + return -EFAULT; + } + q_id++; + } + + return 0; +} + int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) { @@ -768,13 +656,50 @@ hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) } } + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + /* Init Rxq CQE context. 
*/ + err = hinic3_init_rq_cqe_ctxts(nic_dev); + if (err) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed"); + goto set_cqe_ctx_fail; + } + } + return 0; +set_cqe_ctx_fail: set_cons_idx_table_err: hinic3_clean_root_ctxt(hwdev); return err; } +int +hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rq_enable msg; + uint16_t out_size = sizeof(msg); + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + memset(&msg, 0, sizeof(msg)); + msg.rq_enable = enable; + msg.rq_id = q_id; + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN, + &msg, sizeof(msg), &msg, &out_size); + if (err || !out_size || msg.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq enable failed, qid: %u, enable: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, enable, err, msg.msg_head.status, out_size); + return -EFAULT; + } + + return 0; +} + void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev) { diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index e589deed23..e1741d1156 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -28,11 +28,6 @@ #define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) -#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) -#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -231,6 +226,31 @@ hinic3_write_db(void *db_addr, uint16_t q_id, int cos, uint8_t cflag, uint16_t p */ void hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev); +/** + * Initialize RQ integrated CQE context + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev); + +/** + * Set RQ disable or enable + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * @param[in] q_id + * Receive queue id. + * @param[in] enable + * 1: enable 0: disable + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable); + /** * Initialize qps contexts, set SQ ci attributes, arm all SQ. * @@ -264,9 +284,6 @@ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); * Pointer to ethernet device structure. * @param[out] s_feature * s_feature driver supported. - * - * @return - * 0 on success, non-zero on failure. */ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_feature); @@ -281,4 +298,29 @@ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_fea */ uint64_t hinic3_get_driver_feature(struct hinic3_nic_dev *nic_dev); +/** + * Initialize context structure for specified TXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] sq + * Pointer to TXQ structure. + * @param[in] sq_id + * ID of TXQ being configured. + * @param[out] sq_ctxt + * Pointer to structure that will hold TXQ context. + */ +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, + struct hinic3_sq_ctxt *sq_ctxt); + +/** + * Initialize context structure for specified RXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). 
+ * + * @param[in] rq + * Pointer to RXQ structure. + * @param[out] rq_ctxt + * Pointer to structure that will hold RXQ context. + */ +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt); + #endif /* _HINIC3_NIC_IO_H_ */ diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 1a92df59b7..7ae39e3e91 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -279,6 +279,24 @@ struct __rte_cache_aligned hinic3_rxq { #endif }; +/* Rx CQE info get callback function */ +typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/* Rx CQE check status callback function */ +typedef bool (*nic_rx_cqe_done_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe **rx_cqe); + +/* Rx CQE empty poll callback function */ +typedef int (*nic_rx_poll_rq_empty_t)(struct hinic3_rxq *rxq); + +struct hinic3_nic_rx_ops { + nic_rx_get_cqe_info_t nic_rx_get_cqe_info; + nic_rx_cqe_done_t nic_rx_cqe_done; + nic_rx_poll_rq_empty_t nic_rx_poll_rq_empty; +}; + uint16_t hinic3_rx_fill_wqe(struct hinic3_rxq *rxq); uint16_t hinic3_rx_fill_buffers(struct hinic3_rxq *rxq); diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index d150f7c6a4..21958a00cc 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -304,6 +304,14 @@ struct __rte_cache_aligned hinic3_txq { #endif }; +/* Tx WQE offload set callback function */ +typedef void (*nic_tx_set_wqe_offload_t)(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +struct hinic3_nic_tx_ops { + nic_tx_set_wqe_offload_t nic_tx_set_wqe_offload; +}; + void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev); void hinic3_free_txq_mbufs(struct hinic3_txq *txq); void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c index d997647f48..634dfe7239 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_htn_cmdq.h" +#define HTN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define HTN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_htn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -27,7 +32,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_htn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id, uint16_t func_id) { @@ -45,7 +50,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_htn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t func_id; 
uint16_t i; @@ -65,9 +70,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; } @@ -75,10 +80,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_htn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_htn_vlan_ctx); + vlan_ctx = (struct hinic3_htn_vlan_ctx *)cmd_buf->buf; vlan_ctx->dest_func_id = func_id; vlan_ctx->start_qid = q_id; @@ -87,7 +92,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_htn_vlan_ctx)); return HINIC3_HTN_CMD_SVLAN_MODIFY; } diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 89709efdd0..3dbbd53174 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -7,7 +7,7 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_htn_qp_ctxt_header { uint32_t rsvd[2]; uint16_t num_queues; uint16_t queue_type; @@ -15,12 +15,12 @@ struct hinic3_qp_ctxt_header { uint16_t dest_func_id; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_clean_queue_ctxt { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_qp_ctxt_block { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; @@ -43,7 +43,7 @@ enum hinic3_htn_cmd { HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE }; -struct hinic3_vlan_ctx { +struct hinic3_htn_vlan_ctx { uint32_t rsv[2]; uint16_t vlan_tag; uint8_t vlan_sel; diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index 3d4becf07c..dfe8598f78 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_stn_cmdq.h" +#define STN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define STN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_stn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -26,7 +31,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; } 
-static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_stn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id) { @@ -44,7 +49,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_stn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t i; qp_ctxt_block = cmd_buf->buf; @@ -62,9 +67,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; } @@ -72,10 +77,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_stn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_stn_vlan_ctx); + vlan_ctx = (struct hinic3_stn_vlan_ctx *)cmd_buf->buf; vlan_ctx->func_id = func_id; vlan_ctx->qid = q_id; @@ -84,7 +89,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_stn_vlan_ctx)); return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index 8235dcd0fa..f1720c29c7 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -7,27 +7,27 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_stn_qp_ctxt_header { uint16_t num_queues; uint16_t queue_type; uint16_t start_qid; uint16_t rsvd; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_clean_queue_ctxt { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; uint32_t rsvd; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_qp_ctxt_block { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; }; }; -struct hinic3_vlan_ctx { +struct hinic3_stn_vlan_ctx { uint32_t func_id; uint32_t qid; /* if qid = 0xFFFF, config for all queues */ uint32_t vlan_id; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
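A minimal sketch (not part of the hunks above) of how the per-family command-queue ops table referenced through nic_dev->cmdq_ops could be selected at device init. The struct layout, the two getter helpers and the selection function below are assumptions made for illustration only; what the series itself shows are the callback names and the NIC_F_HTN_CMDQ feature bit.

struct hinic3_nic_cmdq_ops {
	uint8_t (*prepare_cmd_buf_qp_context_multi_store)(struct hinic3_nic_dev *nic_dev,
			struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type,
			uint16_t start_qid, uint16_t max_ctxts);
	uint8_t (*prepare_cmd_buf_clean_tso_lro_space)(struct hinic3_nic_dev *nic_dev,
			struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type);
	uint8_t (*prepare_cmd_buf_modify_svlan)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id,
			uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode);
};

/* Hypothetical getters: each adapt module would expose its own static ops table. */
struct hinic3_nic_cmdq_ops *hinic3_stn_get_cmdq_ops(void);
struct hinic3_nic_cmdq_ops *hinic3_htn_get_cmdq_ops(void);

static void
hinic3_select_cmdq_ops(struct hinic3_nic_dev *nic_dev)
{
	/* HTN-family devices advertise NIC_F_HTN_CMDQ; others keep the legacy cmdq format. */
	if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ)
		nic_dev->cmdq_ops = hinic3_htn_get_cmdq_ops();
	else
		nic_dev->cmdq_ops = hinic3_stn_get_cmdq_ops();
}

With the table selected once, init_sq_ctxts(), init_rq_ctxts() and clean_queue_offload_ctxt() stay format-agnostic: they only fill the command buffer through the callback and pass whatever opcode it returns to hinic3_cmdq_direct_resp().
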
* [PATCH v8 5/7] net/hinic3: add rx ops to support Compact CQE 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (3 preceding siblings ...) 2026-03-24 1:55 ` [PATCH v8 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang @ 2026-03-24 1:55 ` Feifei Wang 2026-03-24 1:55 ` [PATCH v8 6/7] net/hinic3: add tx " Feifei Wang ` (5 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-24 1:55 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt receive path, use different func callback to separate normal CQE process and Compact CQE process. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.h | 3 +- drivers/net/hinic3/hinic3_rx.c | 242 ++++++++++++++++++++++------- drivers/net/hinic3/hinic3_rx.h | 164 ++++++++++++++++++- 3 files changed, 343 insertions(+), 66 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 3898edd076..9061e2b217 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -121,8 +121,7 @@ struct hinic3_nic_dev { uint16_t mtu_size; uint16_t rss_state; - uint8_t num_rss; /**< Number of RSS queues. */ - uint8_t rsvd0; /**< Reserved field 0. */ + uint16_t num_rss; /**< Number of RSS queues. */ uint32_t rx_mode; uint8_t rx_queue_list[HINIC3_MAX_QUEUE_NUM]; diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 3d5f4e4524..9e2c80f759 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -219,11 +219,11 @@ hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq) while (free_wqebbs++ < rxq->q_depth) { ci = hinic3_get_rq_local_ci(rxq); - - rx_cqe = &rxq->rx_cqe[ci]; - - /* Clear done bit. */ - rx_cqe->status = 0; + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + rx_cqe = &rxq->rx_cqe[ci]; + /* Clear done bit. */ + rx_cqe->status = 0; + } rx_info = &rxq->rx_info[ci]; rte_pktmbuf_free(rx_info->mbuf); @@ -299,7 +299,7 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) for (i = 0; i < rearm_wqebbs; i++) { dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]); - /* Fill buffer address only. */ + /* Fill packet dma address into wqe. 
*/ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { rq_wqe->extend_wqe.buf_desc.sge.hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); @@ -307,11 +307,16 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) hinic3_hw_be32(lower_32_bits(dma_addr)); rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; - } else { + } else if (rxq->wqe_type == HINIC3_NORMAL_RQ_WQE) { rq_wqe->normal_wqe.buf_hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); rq_wqe->normal_wqe.buf_lo_addr = hinic3_hw_be32(lower_32_bits(dma_addr)); + } else { + rq_wqe->compact_wqe.buf_hi_addr = + hinic3_hw_be32(upper_32_bits(dma_addr)); + rq_wqe->compact_wqe.buf_lo_addr = + hinic3_hw_be32(lower_32_bits(dma_addr)); } rq_wqe = @@ -355,7 +360,7 @@ hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev, void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1)); @@ -372,7 +377,7 @@ hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev) static void hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, uint32_t *indir_tbl) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; int i = 0; int j; @@ -522,7 +527,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { uint8_t queue_pos; - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list, rss_queue_count, queue_id); @@ -534,8 +539,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, rss_queue_count--; memmove(nic_dev->rx_queue_list + queue_pos, nic_dev->rx_queue_list + queue_pos + 1, - (rss_queue_count - queue_pos) * - sizeof(nic_dev->rx_queue_list[0])); + (rss_queue_count - queue_pos) * sizeof(nic_dev->rx_queue_list[0])); } RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list)); @@ -618,6 +622,33 @@ hinic3_poll_rq_empty(struct hinic3_rxq *rxq) return err; } +int +hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq) +{ + struct hinic3_rx_info *rx_info; + struct hinic3_rq_ci_wb rq_ci; + uint16_t sw_ci; + uint16_t hw_ci; + uint32_t val; + + sw_ci = hinic3_get_rq_local_ci(rxq); + val = rte_read32(&rxq->rq_ci->dw1.value); + rq_ci.dw1.value = hinic3_hw_cpu32(val); + hw_ci = rq_ci.dw1.bs.hw_ci; + + while (sw_ci != hw_ci) { + rx_info = &rxq->rx_info[sw_ci]; + rte_pktmbuf_free(rx_info->mbuf); + rx_info->mbuf = NULL; + + sw_ci++; + sw_ci &= rxq->q_mask; + hinic3_update_rq_local_ci(rxq, 1); + } + + return 0; +} + void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, uint32_t *cqe_hole_cnt, uint32_t *head_ci, uint32_t *head_done) @@ -701,14 +732,17 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) rte_spinlock_unlock(&nic_dev->queue_list_lock); /* Send flush rxq cmd to device. 
*/ - err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + else + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, false); if (err) { PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto rq_flush_failed; } - err = hinic3_poll_rq_empty(rxq); + err = nic_dev->rx_ops->nic_rx_poll_rq_empty(rxq); if (err) { hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt, &head_ci, &head_done); @@ -724,6 +758,7 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return 0; poll_rq_failed: + hinic3_set_rq_enable(nic_dev, rxq->q_id, true); rq_flush_failed: rte_spinlock_lock(&nic_dev->queue_list_lock); set_indir_failed: @@ -746,14 +781,22 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id); if (nic_dev->rss_state == HINIC3_RSS_ENABLE) { - err = hinic3_refill_indir_rqid(rxq); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_FDIR) != 0) + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, true); if (err) { - PMD_DRV_LOG(ERR, - "Refill rq to indirect table failed, eth_dev:%s, queue_idx:%d err:%d", - nic_dev->dev_name, rxq->q_id, err); - hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", + nic_dev->dev_name, rxq->q_id); + } else { + err = hinic3_refill_indir_rqid(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Refill rq to indirect table failed," + "eth_dev:%s, queue_idx:%d err:%d", + nic_dev->dev_name, rxq->q_id, err); + hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + } } } + hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); @@ -772,12 +815,9 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) static inline uint64_t -hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) +hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { - uint16_t vlan_tag; - - vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len); - if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) { + if (!vlan_offload || vlan_tag == 0) { *vlan_tci = 0; return 0; } @@ -788,16 +828,14 @@ hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) } static inline uint64_t -hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) +hinic3_rx_csum(uint16_t csum_err, struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; - uint32_t csum_err; uint64_t flags; if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD))) return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN; - csum_err = HINIC3_GET_RX_CSUM_ERR(status); if (likely(csum_err == 0)) return (HINIC3_PKT_RX_IP_CKSUM_GOOD | HINIC3_PKT_RX_L4_CKSUM_GOOD); @@ -832,11 +870,9 @@ hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) } static inline uint64_t -hinic3_rx_rss_hash(uint32_t offload_type, uint32_t rss_hash_value, uint32_t *rss_hash) +hinic3_rx_rss_hash(uint32_t rss_type, uint32_t rss_hash_value, uint32_t *rss_hash) { - uint32_t rss_type; - rss_type = HINIC3_GET_RSS_TYPES(offload_type); if (likely(rss_type != 0)) { *rss_hash = rss_hash_value; return HINIC3_PKT_RX_RSS_HASH; @@ -931,18 +967,118 @@ hinic3_start_all_rqs(struct rte_eth_dev *eth_dev) return err; } +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + volatile struct hinic3_rq_cqe *cqe = NULL; + uint16_t 
sw_ci; + uint32_t status; + + sw_ci = hinic3_get_rq_local_ci(rxq); + *rx_cqe = &rxq->rx_cqe[sw_ci]; + cqe = *rx_cqe; + + status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&cqe->status, + rte_memory_order_acquire))); + if (!HINIC3_GET_RX_DONE(status)) + return false; + + return true; +} + +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_ci_wb rq_ci; + struct rte_mbuf *rxm = NULL; + uint16_t sw_ci, hw_ci; + uint32_t val; + + sw_ci = hinic3_get_rq_local_ci(rxq); + val = rte_read32(&rxq->rq_ci->dw1.value); + rq_ci.dw1.value = hinic3_hw_cpu32(val); + hw_ci = rq_ci.dw1.bs.hw_ci; + + if (hw_ci == sw_ci) + return false; + + rxm = rxq->rx_info[sw_ci].mbuf; + + *rx_cqe = (volatile struct hinic3_rq_cqe *)rte_mbuf_data_addr_default(rxm); + + return true; +} + +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0 = hinic3_hw_cpu32(rx_cqe->status); + uint32_t dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); + + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + + cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); + cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); + + cqe_info->ptype = HINIC3_GET_RX_PTYPE_OFFLOAD(dw0); + cqe_info->vlan_offload = RQ_CQE_OFFOLAD_TYPE_GET(dw2, VLAN_EN); + cqe_info->rss_type = RQ_CQE_OFFOLAD_TYPE_GET(dw2, RSS_TYPE); + cqe_info->rss_hash_value = dw3; +} + +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0, dw1, dw2; + + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + dw0 = hinic3_hw_cpu32(rx_cqe->status); + dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + } else { + /* Compact Rx CQE mode integrates cqe with packet in big endian way. */ + dw0 = rte_be_to_cpu_32(rx_cqe->status); + dw1 = rte_be_to_cpu_32(rx_cqe->vlan_len); + dw2 = rte_be_to_cpu_32(rx_cqe->offload_type); + } + + cqe_info->cqe_type = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_TYPE); + cqe_info->csum_err = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->vlan_offload = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, VLAN_EN); + cqe_info->cqe_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_LEN); + cqe_info->pkt_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PKT_LEN); + cqe_info->ts_flag = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, TS_FLAG); + cqe_info->ptype = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PTYPE); + cqe_info->rss_hash_value = dw1; + + if (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) { + cqe_info->lro_num = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, NUM_LRO); + cqe_info->vlan_tag = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, VLAN); + } + + if (cqe_info->cqe_type == HINIC3_RQ_CQE_INTEGRATE) + cqe_info->data_offset = + (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) ? 
16 : 8; +} + #define HINIC3_RX_EMPTY_THRESHOLD 3 uint16_t hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct hinic3_rxq *rxq = rx_queue; + struct hinic3_nic_dev *nic_dev = rxq->nic_dev; struct hinic3_rx_info *rx_info = NULL; volatile struct hinic3_rq_cqe *rx_cqe = NULL; + struct hinic3_cqe_info cqe_info = {0}; struct rte_mbuf *rxm = NULL; - uint16_t sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0; - uint32_t status, pkt_len, vlan_len, offload_type, lro_num; + uint16_t sw_ci, rx_buf_len, pkts = 0; + uint32_t pkt_len; uint64_t rx_bytes = 0; - uint32_t hash_value; #ifdef HINIC3_XSTAT_PROF_RX uint64_t t1 = rte_get_tsc_cycles(); @@ -953,20 +1089,22 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) goto out; sw_ci = hinic3_get_rq_local_ci(rxq); - rx_buf_len = rxq->buf_len; while (pkts < nb_pkts) { rx_cqe = &rxq->rx_cqe[sw_ci]; - status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&rx_cqe->status, - rte_memory_order_acquire))); - if (!HINIC3_GET_RX_DONE(status)) { + if (!nic_dev->rx_ops->nic_rx_cqe_done(rxq, &rx_cqe)) { rxq->rxq_stats.empty++; break; } - vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len); + nic_dev->rx_ops->nic_rx_get_cqe_info(rxq, rx_cqe, &cqe_info); - pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len); + pkt_len = cqe_info.pkt_len; + /* + * Compact Rx CQE mode integrates cqe with packet, + * so mbuf length needs to remove the length of cqe. + */ + rx_buf_len = rxq->buf_len - cqe_info.data_offset; rx_info = &rxq->rx_info[sw_ci]; rxm = rx_info->mbuf; @@ -982,7 +1120,7 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (likely(pkt_len <= rx_buf_len)) { rxm->data_len = (uint16_t)pkt_len; rxm->pkt_len = pkt_len; - wqebb_cnt++; + hinic3_update_rq_local_ci(rxq, 1); } else { rxm->data_len = rx_buf_len; rxm->pkt_len = rx_buf_len; @@ -991,33 +1129,28 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * If receive jumbo, updating ci will be done by * hinic3_recv_jumbo_pkt function. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1); - wqebb_cnt = 0; + hinic3_update_rq_local_ci(rxq, 1); hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len); sw_ci = hinic3_get_rq_local_ci(rxq); } - rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_off = RTE_PKTMBUF_HEADROOM + cqe_info.data_offset; rxm->port = rxq->port_id; /* 4. Rx checksum offload. */ - rxm->ol_flags |= hinic3_rx_csum(status, rxq); + rxm->ol_flags |= hinic3_rx_csum(cqe_info.csum_err, rxq); /* 5. Vlan offload. */ - offload_type = hinic3_hw_cpu32(rx_cqe->offload_type); - - rxm->ol_flags |= - hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci); + rxm->ol_flags |= hinic3_rx_vlan(cqe_info.vlan_offload, cqe_info.vlan_tag, + &rxm->vlan_tci); /* 6. RSS. */ - hash_value = hinic3_hw_cpu32(rx_cqe->hash_val); - rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value, + rxm->ol_flags |= hinic3_rx_rss_hash(cqe_info.rss_type, cqe_info.rss_hash_value, &rxm->hash.rss); /* 8. LRO. */ - lro_num = HINIC3_GET_RX_NUM_LRO(status); - if (unlikely(lro_num != 0)) { + if (unlikely(cqe_info.lro_num != 0)) { rxm->ol_flags |= HINIC3_PKT_RX_LRO; - rxm->tso_segsz = pkt_len / lro_num; + rxm->tso_segsz = pkt_len / cqe_info.lro_num; } rx_cqe->status = 0; @@ -1027,9 +1160,6 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } if (pkts) { - /* 9. Update local ci. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt); - /* Update packet stats. 
*/ rxq->rxq_stats.packets += pkts; rxq->rxq_stats.bytes += rx_bytes; diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 7ae39e3e91..129c2b4a59 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -5,15 +5,13 @@ #ifndef _HINIC3_RX_H_ #define _HINIC3_RX_H_ -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0 -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19 -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_SHIFT 0 +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_MASK 0xFFFU +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU #define DPI_EXT_ACTION_FILED (1ULL << 32) @@ -21,6 +19,9 @@ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \ RQ_CQE_OFFOLAD_TYPE_##member##_MASK) +#define HINIC3_GET_RX_PTYPE_OFFLOAD(offload_type) \ + RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PTYPE_OFFLOAD) + #define HINIC3_GET_RX_PKT_TYPE(offload_type) \ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE) @@ -122,6 +123,54 @@ #define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD) +/* Compact CQE Field */ +/* cqe dw0 */ +#define RQ_COMPACT_CQE_STATUS_RXDONE_SHIFT 31 +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_SHIFT 30 +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_SHIFT 29 +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_SHIFT 28 +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_SHIFT 25 +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_SHIFT 24 +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_SHIFT 23 +#define RQ_COMPACT_CQE_STATUS_PKT_MC_SHIFT 21 +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_SHIFT 19 +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PTYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_SHIFT 0 + +#define RQ_COMPACT_CQE_STATUS_RXDONE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_MC_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_MASK 0x3U +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_PTYPE_MASK 0xFFFU +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_STATUS_GET(val, member) \ + ((((val) >> RQ_COMPACT_CQE_STATUS_##member##_SHIFT) & \ + RQ_COMPACT_CQE_STATUS_##member##_MASK)) + +#define HINIC3_RQ_CQE_SEPARATE 0 +#define HINIC3_RQ_CQE_INTEGRATE 1 + +/* cqe dw2 */ +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_SHIFT 24 +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_SHIFT 8 + +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_MASK 0xFFU +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(val, member) \ + (((val) >> RQ_COMPACT_CQE_OFFLOAD_##member##_SHIFT) & \ + RQ_COMPACT_CQE_OFFLOAD_##member##_MASK) + +#define HINIC3_RQ_COMPACT_CQE_16BYTE 0 +#define HINIC3_RQ_COMPACT_CQE_8BYTE 1 /* Rx cqe checksum err */ #define HINIC3_RX_CSUM_IP_CSUM_ERR RTE_BIT32(0) #define HINIC3_RX_CSUM_TCP_CSUM_ERR RTE_BIT32(1) @@ -195,6 +244,25 @@ struct __rte_cache_aligned hinic3_rq_cqe 
{ uint32_t pkt_info; }; +struct hinic3_cqe_info { + uint8_t data_offset; + uint8_t lro_num; + uint8_t vlan_offload; + uint8_t cqe_len; + + uint8_t cqe_type; + uint8_t ts_flag; + uint16_t csum_err; + + uint16_t vlan_tag; + uint16_t ptype; + + uint16_t pkt_len; + uint16_t rss_type; + + uint32_t rss_hash_value; +}; + /** * Attention: please do not add any member in hinic3_rx_info * because rxq bulk rearm mode will write mbuf in rx_info. @@ -220,13 +288,32 @@ struct hinic3_rq_normal_wqe { uint32_t cqe_lo_addr; }; +struct hinic3_rq_compact_wqe { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + struct hinic3_rq_wqe { union { + struct hinic3_rq_compact_wqe compact_wqe; struct hinic3_rq_normal_wqe normal_wqe; struct hinic3_rq_extend_wqe extend_wqe; }; }; +struct hinic3_rq_ci_wb { + union { + struct { + uint16_t cqe_num; + uint16_t hw_ci; + } bs; + RTE_ATOMIC(uint32_t) value; + } dw1; + + uint32_t rsvd[3]; +}; + + struct __rte_cache_aligned hinic3_rxq { struct hinic3_nic_dev *nic_dev; @@ -263,6 +350,10 @@ struct __rte_cache_aligned hinic3_rxq { struct hinic3_rq_cqe *rx_cqe; struct rte_mempool *mb_pool; + const struct rte_memzone *ci_mz; + struct hinic3_rq_ci_wb *rq_ci; + rte_iova_t rq_ci_paddr; + const struct rte_memzone *cqe_mz; rte_iova_t cqe_start_paddr; void *cqe_start_vaddr; @@ -308,6 +399,7 @@ void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev); int hinic3_update_rss_config(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq); int hinic3_poll_rq_empty(struct hinic3_rxq *rxq); void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, @@ -369,4 +461,60 @@ hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, uint16_t wqe_cnt) rxq->delta += wqe_cnt; } +/** + * Get receive cqe information + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * Receive cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Get receive compact cqe information + * + * @param[in] rx_queue + * Receive queue + * @param[in] rx_cqe + * Receive compact cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Check whether pkt is received when CQE is separated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/** + * Check whether pkt is received when CQE is integrated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + #endif /* _HINIC3_RX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
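The receive patch above leaves the population of nic_dev->rx_ops to the device init path. A minimal sketch of one plausible wiring follows; the two static ops tables and the selection helper are assumptions added for illustration, while the callback implementations and the HINIC3_SUPPORT_RX_HW_COMPACT_CQE test are the ones this series introduces.

static struct hinic3_nic_rx_ops hinic3_rx_ops_separate_cqe = {
	.nic_rx_get_cqe_info  = hinic3_rx_get_cqe_info,
	.nic_rx_cqe_done      = hinic3_rx_separate_cqe_done,
	.nic_rx_poll_rq_empty = hinic3_poll_rq_empty,
};

static struct hinic3_nic_rx_ops hinic3_rx_ops_compact_cqe = {
	.nic_rx_get_cqe_info  = hinic3_rx_get_compact_cqe_info,
	.nic_rx_cqe_done      = hinic3_rx_integrated_cqe_done,
	.nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty,
};

static void
hinic3_select_rx_ops(struct hinic3_nic_dev *nic_dev)
{
	/* Compact CQE devices poll the RQ CI write-back area instead of per-WQE CQE status. */
	if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev))
		nic_dev->rx_ops = &hinic3_rx_ops_compact_cqe;
	else
		nic_dev->rx_ops = &hinic3_rx_ops_separate_cqe;
}

Choosing the callbacks once at start-up keeps hinic3_recv_pkts() free of per-packet branching on the CQE layout: the hot loop only calls nic_rx_cqe_done() and nic_rx_get_cqe_info() through the selected table.
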
* [PATCH v8 6/7] net/hinic3: add tx ops to support Compact CQE 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (4 preceding siblings ...) 2026-03-24 1:55 ` [PATCH v8 5/7] net/hinic3: add rx " Feifei Wang @ 2026-03-24 1:55 ` Feifei Wang 2026-03-24 1:55 ` [PATCH v8 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang ` (4 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-24 1:55 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt send path, use different func callback to configure compact wqe and normal wqe offload. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_tx.c | 452 +++++++++++++++++---------------- drivers/net/hinic3/hinic3_tx.h | 146 +++++++++-- 2 files changed, 360 insertions(+), 238 deletions(-) diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index c896fcc76b..e0ff095c04 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -21,6 +21,7 @@ #define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET 1 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0 +#define MAX_TSO_NUM_FRAG 1024 #define HINIC3_TX_OFFLOAD_MASK \ (HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT) @@ -28,7 +29,8 @@ #define HINIC3_TX_CKSUM_OFFLOAD_MASK \ (HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \ HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \ - HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG) + HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_OUTER_UDP_CKSUM | \ + HINIC3_PKT_TX_TCP_SEG) static inline uint16_t hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq) @@ -56,26 +58,23 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq) } static void * -hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) +hinic3_sq_get_wqebbs(struct hinic3_txq *sq, uint16_t num_wqebbs, uint16_t *prod_idx) { - uint16_t cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx); - uint32_t end_pi; + *prod_idx = MASKED_QUEUE_IDX(sq, sq->prod_idx); + sq->prod_idx += num_wqebbs; - end_pi = cur_pi + wqe_info->wqebb_cnt; - sq->prod_idx += wqe_info->wqebb_cnt; + return NIC_WQE_ADDR(sq, *prod_idx); +} - wqe_info->owner = (uint8_t)(sq->owner); - wqe_info->pi = cur_pi; - wqe_info->wrapped = 0; +static inline uint16_t +hinic3_get_and_update_sq_owner(struct hinic3_txq *sq, uint16_t curr_pi, uint16_t wqebb_cnt) +{ + uint16_t owner = sq->owner; - if (unlikely(end_pi >= sq->q_depth)) { + if (unlikely(curr_pi + wqebb_cnt >= sq->q_depth)) sq->owner = !sq->owner; - if (likely(end_pi > sq->q_depth)) - wqe_info->wrapped = (uint8_t)(sq->q_depth - cur_pi); - } - - return NIC_WQE_ADDR(sq, cur_pi); + return owner; } static inline void @@ -90,61 +89,39 @@ hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) /** * Sets the WQE combination information in the transmit queue (SQ). * - * @param[in] txq + * @param[in] sq * Point to send queue. * @param[out] wqe_combo * Point to wqe_combo of send queue(SQ). - * @param[in] wqe - * Point to wqe of send queue(SQ). * @param[in] wqe_info * Point to wqe_info of send queue(SQ). 
*/ static void -hinic3_set_wqe_combo(struct hinic3_txq *txq, +hinic3_set_wqe_combo(struct hinic3_txq *sq, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_sq_wqe *wqe, struct hinic3_wqe_info *wqe_info) { - wqe_combo->hdr = &wqe->compact_wqe.wqe_desc; - - if (wqe_info->offload) { - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->task = (struct hinic3_sq_task *) - (void *)txq->sq_head_addr; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr + txq->wqebb_size); - } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = wqe->extend_wqe.buf_desc; - } + uint16_t tmp_pi; - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->hdr = hinic3_sq_get_wqebbs(sq, 1, &wqe_info->pi); + if (wqe_info->wqebb_cnt == 1) { + /* compact wqe */ + wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_4BYTES; + wqe_combo->task = (struct hinic3_sq_task *)&wqe_combo->hdr->queue_info; + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, 1); return; } - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->bds_head = - (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task); - } + /* extend normal wqe */ + wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->task = hinic3_sq_get_wqebbs(sq, 1, &tmp_pi); + if (wqe_info->sge_cnt > 1) + wqe_combo->bds_head = hinic3_sq_get_wqebbs(sq, wqe_info->sge_cnt - 1, &tmp_pi); - if (wqe_info->wqebb_cnt > 1) { - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS; - - /* This section used as vlan insert, needs to clear. */ - wqe_combo->bds_head->rsvd = 0; - } else { - wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; - } + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, wqe_info->wqebb_cnt); } int @@ -311,6 +288,8 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) /** * Prepare the data packet to be sent and calculate the internal L3 offset. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to be processed. * @param[out] inner_l3_offset @@ -319,14 +298,20 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) * 0 as success, -EINVAL as failure. */ static int -hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) +hinic3_tx_offload_pkt_prepare(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + uint16_t *inner_l3_offset) { uint64_t ol_flags = mbuf->ol_flags; - /* Only support vxlan offload. 
*/ - if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) && - (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN))) - return -EINVAL; + if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) { + if (!(((ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN) && + HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_GENEVE) && + HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_IPIP) && + HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)))) + return -EINVAL; + } #ifdef RTE_LIBRTE_ETHDEV_DEBUG if (rte_validate_tx_offload(mbuf) != 0) @@ -358,107 +343,121 @@ hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) return 0; } -static inline void -hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, uint16_t vlan_tag, - uint8_t vlan_type) +void +hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) +{ + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l4_en, INNER_L4_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l3_en, INNER_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->encapsulation, TUNNEL_FLAG); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l3_en, OUT_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l4_en, OUT_L4_EN); + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); + + if (wqe_combo->task_type == SQ_WQE_TASKSECT_16BYTES) { + task->ip_identify = 0; + task->pkt_info2 = 0; + task->vlan_offload = 0; + task->vlan_offload = SQ_TASK_INFO3_SET(offload_info->vlan_tag, VLAN_TAG) | + SQ_TASK_INFO3_SET(offload_info->vlan_sel, VLAN_TYPE) | + SQ_TASK_INFO3_SET(offload_info->vlan_valid, VLAN_TAG_VALID); + task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + } +} + +void +hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) | - SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) | - SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID); + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + wqe_combo->task->pkt_info0 = + SQ_TASK_INFO_SET(offload_info->out_l3_en, OUT_L3_EN) | + SQ_TASK_INFO_SET(offload_info->out_l4_en, OUT_L4_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l3_en, INNER_L3_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l4_en, INNER_L4_EN) | + SQ_TASK_INFO_SET(offload_info->vlan_valid, VLAN_VALID) | + SQ_TASK_INFO_SET(offload_info->vlan_sel, VLAN_SEL) | + SQ_TASK_INFO_SET(offload_info->vlan_tag, VLAN_TAG); + + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); } -/** - * Set the corresponding offload information based on ol_flags of the mbuf. - * - * @param[in] mbuf - * Point to the mbuf for which offload needs to be set in the sending queue. - * @param[out] task - * Point to task of send queue(SQ). - * @param[out] wqe_info - * Point to wqe_info of send queue(SQ). - * @return - * 0 as success, -EINVAL as failure. 
- */ static int -hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task, - struct hinic3_wqe_info *wqe_info) +hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, + struct rte_mbuf *mbuf, + struct hinic3_sq_wqe_combo *wqe_combo, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t pld_offset = 0; - uint32_t queue_info = 0; - uint16_t vlan_tag; - - task->pkt_info0 = 0; - task->ip_identify = 0; - task->pkt_info2 = 0; - task->vlan_offload = 0; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; /* Vlan offload. */ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) { - vlan_tag = mbuf->vlan_tci; - hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0); - task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + offload_info->vlan_valid = 1; + offload_info->vlan_tag = mbuf->vlan_tci; + offload_info->vlan_sel = HINIC3_TX_TPID0; } - /* Cksum offload. */ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK)) - return 0; + goto set_tx_wqe_offload; /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - pld_offset = wqe_info->payload_offset; - if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET) + wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); - - queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF); - - /* Set MSS value. */ - queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS); + offload_info->inner_l3_en = 1; + offload_info->inner_l4_en = 1; + wqe_info->queue_info.tso = 1; + wqe_info->queue_info.mss = mbuf->tso_segsz; } else { if (ol_flags & HINIC3_PKT_TX_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); + offload_info->inner_l3_en = 1; switch (ol_flags & HINIC3_PKT_TX_L4_MASK) { case HINIC3_PKT_TX_TCP_CKSUM: case HINIC3_PKT_TX_UDP_CKSUM: case HINIC3_PKT_TX_SCTP_CKSUM: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); + offload_info->inner_l4_en = 1; break; - case HINIC3_PKT_TX_L4_NO_CKSUM: break; - default: PMD_DRV_LOG(INFO, "not support pkt type"); return -EINVAL; } } - /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) { case HINIC3_PKT_TX_TUNNEL_VXLAN: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG); + case HINIC3_PKT_TX_TUNNEL_VXLAN_GPE: + case HINIC3_PKT_TX_TUNNEL_GENEVE: + offload_info->encapsulation = 1; + wqe_info->queue_info.udp_dp_en = 1; break; - case 0: break; default: - /* For non UDP/GRE tunneling, drop the tunnel packet. 
*/ PMD_DRV_LOG(INFO, "not support tunnel pkt type"); return -EINVAL; } if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN); + offload_info->out_l3_en = 1; - task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); - task->pkt_info2 = hinic3_hw_be32(task->pkt_info2); - wqe_info->queue_info = queue_info; + if (ol_flags & HINIC3_PKT_TX_OUTER_UDP_CKSUM) + offload_info->out_l4_en = 1; + +set_tx_wqe_offload: + nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -477,7 +476,9 @@ static bool hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) { uint32_t total_len, limit_len, checked_len, left_len, adjust_mss; - uint32_t i, max_sges, left_sges, first_len; + uint32_t max_sges, left_sges, first_len; + uint32_t payload_len, frag_num; + uint32_t i; struct rte_mbuf *mbuf_head, *mbuf_first; struct rte_mbuf *mbuf_pre = mbuf; @@ -485,6 +486,17 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) mbuf_head = mbuf; mbuf_first = mbuf; + /* Calculate the number of message payload frag, + * if it exceeds the hardware limit of 10 bits, + * packet will be discarded. + */ + payload_len = mbuf_head->pkt_len - wqe_info->payload_offset; + frag_num = (payload_len + mbuf_head->tso_segsz - 1) / mbuf_head->tso_segsz; + if (frag_num > MAX_TSO_NUM_FRAG) { + PMD_DRV_LOG(WARNING, "tso frag num over hw limit, frag_num:0x%x.", frag_num); + return false; + } + /* Tso sge number validation. */ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) { checked_len = 0; @@ -544,9 +556,48 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) return true; } +static int +hinic3_non_tso_pkt_pre_process(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +{ + struct rte_mbuf *mbuf_pkt = mbuf; + uint32_t total_len = 0; + uint16_t i; + + if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) + return 0; + + /* Non-tso packet length must less than 64KB. */ + if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) + return -EINVAL; + + /* + * Mbuf number of non-tso packet must less than the sge number + * that nic can support. The excess part will be copied to another + * mbuf. + */ + for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { + total_len += mbuf_pkt->data_len; + mbuf_pkt = mbuf_pkt->next; + } + + /* + * Max copy mbuf size is 4KB, packet will be dropped directly, + * if total copy length is more than it. + */ + if ((total_len + HINIC3_COPY_MBUF_SIZE) < mbuf->pkt_len) + return -EINVAL; + + wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; + wqe_info->cpy_mbuf_cnt = 1; + + return 0; +} + /** * Checks and processes transport offload information for data packets. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to send. * @param[in] wqe_info @@ -555,56 +606,29 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) * 0 as success, -EINVAL as failure. */ static int -hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t i, total_len, inner_l3_offset = 0; + uint16_t inner_l3_offset = 0; int err; - struct rte_mbuf *mbuf_pkt = NULL; wqe_info->sge_cnt = mbuf->nb_segs; + wqe_info->cpy_mbuf_cnt = 0; /* Check if the packet set available offload flags. 
*/ if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) { wqe_info->offload = 0; - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } wqe_info->offload = 1; - err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset); + err = hinic3_tx_offload_pkt_prepare(nic_dev, mbuf, &inner_l3_offset); if (err) return err; - /* Non tso mbuf only check sge num. */ + /* Non-tso mbuf only check sge num. */ if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { - if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) - /* Non tso packet len must less than 64KB. */ - return -EINVAL; - - if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) - /* Valid non-tso mbuf. */ - return 0; - - /* - * The number of non-tso packet fragments must be less than 38, - * and mbuf segs greater than 38 must be copied to other - * buffers. - */ - total_len = 0; - mbuf_pkt = mbuf; - for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { - total_len += mbuf_pkt->data_len; - mbuf_pkt = mbuf_pkt->next; - } - - /* Default support copy total 4k mbuf segs. */ - if ((uint32_t)(total_len + (uint16_t)HINIC3_COPY_MBUF_SIZE) < - mbuf->pkt_len) - return -EINVAL; - - wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; - wqe_info->cpy_mbuf_cnt = 1; - - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } /* Tso mbuf. */ @@ -629,6 +653,7 @@ hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); buf_descs->len = hinic3_hw_be32(len); + buf_descs->rsvd = 0; } static inline struct rte_mbuf * @@ -701,7 +726,6 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, { struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head; - uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt; uint16_t real_segs = mbuf->nb_segs; rte_iova_t dma_addr; @@ -736,11 +760,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, * Parts of wqe is in sq bottom while parts * of wqe is in sq head. 
*/ - if (unlikely(wqe_info->wrapped && - (uint64_t)buf_desc == txq->sq_bot_sge_addr)) - buf_desc = (struct hinic3_sq_bufdesc *) - (void *)txq->sq_head_addr; - + if (unlikely((uint64_t)buf_desc == txq->sq_bot_sge_addr)) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); buf_desc++; } @@ -777,10 +798,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, hinic3_hw_be32(lower_32_bits(dma_addr)); wqe_desc->ctrl_len = mbuf->data_len; } else { - if (unlikely(wqe_info->wrapped && - ((uint64_t)buf_desc == txq->sq_bot_sge_addr))) - buf_desc = (struct hinic3_sq_bufdesc *) - txq->sq_head_addr; + if (unlikely(((uint64_t)buf_desc == txq->sq_bot_sge_addr))) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); } @@ -802,44 +821,44 @@ static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, struct hinic3_wqe_info *wqe_info) { + struct hinic3_queue_info *queue_info = &wqe_info->queue_info; struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; + uint32_t *qsf = &wqe_desc->queue_info; - if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) { - wqe_desc->ctrl_len |= - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | - SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | - SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - /* Compact wqe queue_info will transfer to ucode. */ - wqe_desc->queue_info = 0; - - return; - } - - wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | - SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | + wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DIRECT) | SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - wqe_desc->queue_info = wqe_info->queue_info; - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC); - - if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) { - wqe_desc->queue_info |= - SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); - } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) < - TX_MSS_MIN) { - /* Mss should not less than 80. */ - wqe_desc->queue_info = - SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS); - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { + wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | + SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | + SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + + *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); + + if (!SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS)) { + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); + } else if (SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS) < TX_MSS_MIN) { + /* MSS should not less than 80. 
*/ + *qsf = SQ_CTRL_QUEUE_INFO_CLEAR(*qsf, MSS); + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + } + *qsf = hinic3_hw_be32(*qsf); + } else { + wqe_desc->ctrl_len |= SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->udp_dp_en, UDP_DP_EN) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE); } - wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info); + wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); } /** @@ -861,9 +880,7 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) struct hinic3_tx_info *tx_info = NULL; struct rte_mbuf *mbuf_pkt = NULL; struct hinic3_sq_wqe_combo wqe_combo = {0}; - struct hinic3_sq_wqe *sq_wqe = NULL; struct hinic3_wqe_info wqe_info = {0}; - uint32_t offload_err, free_cnt; uint64_t tx_bytes = 0; uint16_t free_wqebb_cnt, nb_tx; @@ -885,16 +902,28 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Tx loop routine. */ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { mbuf_pkt = *tx_pkts++; - if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) { + if (unlikely(hinic3_get_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_info))) { txq->txq_stats.offload_errors++; break; } - if (!wqe_info.offload) - wqe_info.wqebb_cnt = wqe_info.sge_cnt; - else - /* Use extended sq wqe with normal TS. */ - wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1; + wqe_info.wqebb_cnt = wqe_info.sge_cnt; + if (likely(wqe_info.offload || wqe_info.wqebb_cnt > 1)) { + if (txq->tx_wqe_compact_task) { + /** + * One more wqebb is needed for compact task under two situations: + * 1. TSO: MSS field is needed, no available space for + * compact task in compact wqe. + * 2. SGE number > 1: wqe is handlerd as extended wqe by nic. + */ + if (mbuf_pkt->ol_flags & HINIC3_PKT_TX_TCP_SEG || + wqe_info.wqebb_cnt > 1) + wqe_info.wqebb_cnt++; + } else { + /* Use extended sq wqe with normal TS */ + wqe_info.wqebb_cnt++; + } + } free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq); if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) { @@ -907,28 +936,16 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) } } - /* Get sq wqe address from wqe_page. */ - sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info); - if (unlikely(!sq_wqe)) { - txq->txq_stats.tx_busy++; - break; - } - /* Task or bd section maybe wrapped for one wqe. */ - hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info); + hinic3_set_wqe_combo(txq, &wqe_combo, &wqe_info); - wqe_info.queue_info = 0; /* Fill tx packet offload into qsf and task field. */ - if (wqe_info.offload) { - offload_err = hinic3_set_tx_offload(mbuf_pkt, - wqe_combo.task, - &wqe_info); + offload_err = hinic3_set_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_combo, &wqe_info); if (unlikely(offload_err)) { hinic3_put_sq_wqe(txq, &wqe_info); txq->txq_stats.offload_errors++; break; } - } /* Fill sq_wqe buf_desc and bd_desc. */ err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo, @@ -944,7 +961,12 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) tx_info->mbuf = mbuf_pkt; tx_info->wqebb_cnt = wqe_info.wqebb_cnt; - hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); + /* + * For wqe compact type, no need to prepare + * sq ctrl info. 
+ */ + if (wqe_combo.wqe_type != SQ_WQE_COMPACT_TYPE) + hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); tx_bytes += mbuf_pkt->pkt_len; } diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index 21958a00cc..e0ed9908ad 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -6,30 +6,40 @@ #define _HINIC3_TX_H_ #define MAX_SINGLE_SGE_SIZE 65536 -#define HINIC3_NONTSO_PKT_MAX_SGE 38 /**< non-tso max sge 38. */ +#define HINIC3_NONTSO_PKT_MAX_SGE 32 /**< non-tso max sge 32. */ #define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE) #define HINIC3_TSO_PKT_MAX_SGE 127 /**< tso max sge 127. */ #define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE) -/* Tx offload info. */ -struct hinic3_tx_offload_info { - uint8_t outer_l2_len; - uint8_t outer_l3_type; - uint16_t outer_l3_len; - - uint8_t inner_l2_len; - uint8_t inner_l3_type; - uint16_t inner_l3_len; - - uint8_t tunnel_length; - uint8_t tunnel_type; - uint8_t inner_l4_type; - uint8_t inner_l4_len; +/* Tx wqe queue info */ +struct hinic3_queue_info { + uint8_t pri; + uint8_t uc; + uint8_t sctp; + uint8_t udp_dp_en; + uint8_t tso; + uint8_t ufo; + uint8_t payload_offset; + uint8_t pkt_type; + uint16_t mss; + uint16_t rsvd; +}; - uint16_t payload_offset; - uint8_t inner_l4_tcp_udp; - uint8_t rsvd0; /**< Reserved field. */ +/* Tx wqe offload info */ +struct hinic3_offload_info { + uint8_t encapsulation; + uint8_t esp_next_proto; + uint8_t inner_l4_en; + uint8_t inner_l3_en; + uint8_t out_l4_en; + uint8_t out_l3_en; + uint8_t ipsec_offload; + uint8_t pkt_1588; + uint8_t vlan_sel; + uint8_t vlan_valid; + uint16_t vlan_tag; + uint32_t ip_identify; }; /* Tx wqe ctx. */ @@ -42,14 +52,15 @@ struct hinic3_wqe_info { uint8_t rsvd0; /**< Reserved field 0. */ uint16_t payload_offset; - uint8_t wrapped; + uint8_t rsvd1; /**< Reserved field 1. */ uint8_t owner; uint16_t pi; uint16_t wqebb_cnt; - uint16_t rsvd1; /**< Reserved field 1. */ + uint16_t rsvd2; /**< Reserved field 2. */ - uint32_t queue_info; + struct hinic3_queue_info queue_info; + struct hinic3_offload_info offload_info; }; /* Descriptor for the send queue of wqe. */ @@ -103,8 +114,15 @@ struct hinic3_sq_wqe_combo { uint32_t task_type; }; -enum sq_wqe_data_format { +/* Tx queue ctrl info */ +enum sq_wqe_type { SQ_NORMAL_WQE = 0, + SQ_DIRECT_WQE = 1, +}; + +enum sq_wqe_data_format { + SQ_WQE_SGL = 0, + SQ_WQE_INLINE_DATA = 1, }; /* Indicates the type of a WQE. */ @@ -117,7 +135,7 @@ enum sq_wqe_ec_type { /* Indicates the type of tasks with different lengths. 
*/ enum sq_wqe_tasksect_len_type { - SQ_WQE_TASKSECT_46BITS = 0, + SQ_WQE_TASKSECT_4BYTES = 0, SQ_WQE_TASKSECT_16BYTES = 1, }; @@ -177,6 +195,33 @@ enum sq_wqe_tasksect_len_type { ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT))) +/* Compact queue info */ +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_SHIFT 14 +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_SHIFT 16 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_SHIFT 24 +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_SHIFT 25 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_SHIFT 26 +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_SHIFT 27 + +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_MASK 0x3U +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_MASK 0xFFU +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_MASK 0x1U + +#define SQ_CTRL_COMPACT_QUEUE_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_GET(val, member) \ + (((val) >> SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) & \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_CLEAR(val, member) \ + ((val) & (~(SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT))) + /* Setting and obtaining task information */ #define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19 #define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22 @@ -229,6 +274,37 @@ enum sq_wqe_tasksect_len_type { (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \ SQ_TASK_INFO3_##member##_MASK) +/* compact wqe task field */ +#define SQ_TASK_INFO_PKT_1588_SHIFT 31 +#define SQ_TASK_INFO_IPSEC_PROTO_SHIFT 30 +#define SQ_TASK_INFO_OUT_L3_EN_SHIFT 28 +#define SQ_TASK_INFO_OUT_L4_EN_SHIFT 27 +#define SQ_TASK_INFO_INNER_L3_EN_SHIFT 25 +#define SQ_TASK_INFO_INNER_L4_EN_SHIFT 24 +#define SQ_TASK_INFO_ESP_NEXT_PROTO_SHIFT 22 +#define SQ_TASK_INFO_VLAN_VALID_SHIFT 19 +#define SQ_TASK_INFO_VLAN_SEL_SHIFT 16 +#define SQ_TASK_INFO_VLAN_TAG_SHIFT 0 + +#define SQ_TASK_INFO_PKT_1588_MASK 0x1U +#define SQ_TASK_INFO_IPSEC_PROTO_MASK 0x1U +#define SQ_TASK_INFO_OUT_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_OUT_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_ESP_NEXT_PROTO_MASK 0x3U +#define SQ_TASK_INFO_VLAN_VALID_MASK 0x1U +#define SQ_TASK_INFO_VLAN_SEL_MASK 0x7U +#define SQ_TASK_INFO_VLAN_TAG_MASK 0xFFFFU + +#define SQ_TASK_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_TASK_INFO_##member##_MASK) << \ + SQ_TASK_INFO_##member##_SHIFT) + +#define SQ_TASK_INFO_GET(val, member) \ + (((val) >> SQ_TASK_INFO_##member##_SHIFT) & \ + SQ_TASK_INFO_##member##_MASK) + /* Defines the TX queue status. 
*/ enum hinic3_txq_status { HINIC3_TXQ_STATUS_START = 0, @@ -298,6 +374,8 @@ struct __rte_cache_aligned hinic3_txq { uint64_t sq_head_addr; uint64_t sq_bot_sge_addr; uint32_t cos; + uint8_t tx_wqe_compact_task; + uint8_t rsvd[3]; struct hinic3_txq_stats txq_stats; #ifdef HINIC3_XSTAT_PROF_TX uint64_t prof_tx_end_tsc; @@ -319,4 +397,26 @@ uint16_t hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb int hinic3_stop_sq(struct hinic3_txq *txq); int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev); int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt); + +/** + * Set wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +/** + * Set compact wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); #endif /**< _HINIC3_TX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
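The WQEBB accounting added to hinic3_xmit_pkts() above is the core of the compact-WQE change: when the transmit queue uses the 4-byte compact task section, a single-SGE packet without TSO needs no extra WQEBB beyond its buffer descriptors, while TSO (which needs the MSS field) or more than one SGE still forces the extended layout with the 16-byte task section. The following sketch restates that decision; the helper name and the boolean parameters stand in for the wqe_info/ol_flags checks in the patch and are not part of the driver.

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the WQEBB count computed in hinic3_xmit_pkts(): sge_cnt and
 * offload stand for wqe_info.sge_cnt/offload, tso for HINIC3_PKT_TX_TCP_SEG,
 * compact_task for txq->tx_wqe_compact_task.
 */
static inline uint16_t
sq_wqebb_cnt(uint16_t sge_cnt, bool offload, bool tso, bool compact_task)
{
	uint16_t wqebb_cnt = sge_cnt;	/* one buffer descriptor per SGE */

	/* No offload and a single SGE: the compact WQE header is enough. */
	if (!offload && sge_cnt == 1)
		return wqebb_cnt;

	if (compact_task) {
		/*
		 * The compact task section lives in the WQE header; an extra
		 * WQEBB is needed only when TSO requires the MSS field or the
		 * WQE becomes extended because more than one SGE is used.
		 */
		if (tso || sge_cnt > 1)
			wqebb_cnt++;
	} else {
		/* The normal 16-byte task section always takes one WQEBB. */
		wqebb_cnt++;
	}

	return wqebb_cnt;
}

With that count, hinic3_set_wqe_combo() picks SQ_WQE_COMPACT_TYPE with SQ_WQE_TASKSECT_4BYTES when wqebb_cnt is 1 and the extended layout otherwise, matching the single-WQEBB check in the rewritten function earlier in this patch.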
* [PATCH v8 7/7] net/hinic3: use different callback func to support htn fdir 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (5 preceding siblings ...) 2026-03-24 1:55 ` [PATCH v8 6/7] net/hinic3: add tx " Feifei Wang @ 2026-03-24 1:55 ` Feifei Wang 2026-03-24 3:27 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Stephen Hemminger ` (3 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-24 1:55 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For the new SPx NIC, flow rules are created differently than on the previous SPx NICs, so use different callback functions to separate the two paths. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 55 +- drivers/net/hinic3/base/hinic3_nic_cfg.h | 19 +- drivers/net/hinic3/hinic3_ethdev.c | 37 +- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_rx.c | 26 +- drivers/net/hinic3/hinic3_tx.c | 16 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 3 +- drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 2 +- 10 files changed, 860 insertions(+), 324 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index 22caac0457..5387626b98 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -970,7 +970,7 @@ hinic3_set_vlan_filter(struct hinic3_hwdev *hwdev, uint32_t vlan_filter_ctrl) static int hinic3_set_rx_lro(struct hinic3_hwdev *hwdev, uint8_t ipv4_en, - uint8_t ipv6_en, uint8_t lro_max_pkt_len) + uint8_t ipv6_en, uint8_t lro_max_pkt_len) { struct hinic3_cmd_lro_config lro_cfg = {0}; uint16_t out_size = sizeof(lro_cfg); @@ -1029,7 +1029,7 @@ hinic3_set_rx_lro_timer(struct hinic3_hwdev *hwdev, uint32_t timer_value) } int -hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len) { uint8_t ipv4_en = 0, ipv6_en = 0; @@ -1468,54 +1468,6 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return 0; } -/** - * Set the Ethernet type filtering rule for the FDIR of a NIC. - * - * @param[in] hwdev - * Pointer to hardware device structure. - * @param[in] pkt_type - * Indicate the packet type. - * @param[in] queue_id - * Indicate the queue id. - * @param[in] en - * Indicate whether to add or delete an operation. 1 - add; 0 - delete. - * - * @return - * 0 on success, non-zero on failure.
- */ -int -hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en) -{ - struct hinic3_set_fdir_ethertype_rule ethertype_cmd; - uint16_t out_size = sizeof(ethertype_cmd); - int err; - - if (!hwdev) - return -EINVAL; - - memset(ðertype_cmd, 0, - sizeof(struct hinic3_set_fdir_ethertype_rule)); - ethertype_cmd.func_id = hinic3_global_func_id(hwdev); - ethertype_cmd.pkt_type = pkt_type; - ethertype_cmd.pkt_type_en = en; - ethertype_cmd.qid = (uint8_t)queue_id; - - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, - HINIC3_NIC_CMD_SET_FDIR_STATUS, - ðertype_cmd, sizeof(ethertype_cmd), - ðertype_cmd, &out_size); - if (err || ethertype_cmd.head.status || !out_size) { - PMD_DRV_LOG(ERR, - "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", - err, ethertype_cmd.head.status, out_size, - ethertype_cmd.func_id); - return -EIO; - } - - return 0; -} - int hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule, uint8_t tcam_rule_type) @@ -1543,8 +1495,7 @@ hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tc &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd, &out_size); if (err || tcam_cmd.msg_head.status || !out_size) { - PMD_DRV_LOG(ERR, - "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", + PMD_DRV_LOG(ERR, "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", err, tcam_cmd.msg_head.status, out_size); return -EIO; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index 06d5bc7d1b..6d3eb433bd 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -1203,7 +1203,7 @@ int hinic3_set_rx_vlan_offload(struct hinic3_hwdev *hwdev, uint8_t en); * @return * 0 on success, non-zero on failure. */ -int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len); /** @@ -1522,8 +1522,21 @@ int hinic3_get_feature_from_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, */ int hinic3_set_feature_to_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, uint16_t size); -int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en); +/** + * Set the Ethernet type filtering rule for the FDIR of a NIC. + * + * @param[in] hwdev + * Pointer to hardware device structure. + * @param[in] pkt_type + * Indicate the packet type. + * @param[in] ethertype_filter + * Pointer to ethertype_filter structure. + * @param[in] en + * Indicate whether to add or delete an operation. 1 - add; 0 - delete. + * + * @return + * 0 on success, non-zero on failure. 
+ */ int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status); diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 1010773ac1..f4eb788686 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -975,8 +975,8 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, "RX queue depth is out of range from %d to %d", HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH); PMD_DRV_LOG(ERR, - "nb_desc: %d, q_depth: %d, port: %d queue: %d", - nb_desc, rq_depth, dev->data->port_id, qid); + "nb_desc: %d, q_depth: %d, port: %d queue: %d", + nb_desc, rq_depth, dev->data->port_id, qid); return -EINVAL; } @@ -2158,8 +2158,7 @@ hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) } /* Update max frame size. */ - HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = - HINIC3_MTU_TO_PKTLEN(mtu); + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = HINIC3_MTU_TO_PKTLEN(mtu); nic_dev->mtu_size = mtu; return err; } @@ -2357,6 +2356,12 @@ hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev) uint32_t rx_mode; int err; + if (!(nic_dev->feature_cap & NIC_F_PROMISC)) { + PMD_DRV_LOG(ERR, "nic_dev: %s, port_id: %d, do not support vf promisc: %" PRIu64 "", + nic_dev->dev_name, dev->data->port_id, nic_dev->feature_cap); + return -ENOTSUP; + } + rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC; err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode); @@ -2527,20 +2532,22 @@ hinic3_rss_hash_update(struct rte_eth_dev *dev, } rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | - RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) - ? 1 - : 0; + RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | - RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) - ? 1 - : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.tcp_ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); if (err) PMD_DRV_LOG(ERR, "Set RSS type failed"); @@ -2597,11 +2604,13 @@ hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) rss_conf->rss_hf |= rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) : 0; - rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0; - rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0; rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? 
RTE_ETH_RSS_IPV6_TCP_EX : 0; + } return 0; } diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c index 263a281729..37a4f0cf52 100644 --- a/drivers/net/hinic3/hinic3_fdir.c +++ b/drivers/net/hinic3/hinic3_fdir.c @@ -2,15 +2,15 @@ * Copyright(c) 2025 Huawei Technologies Co., Ltd */ +#include "base/hinic3_cmd.h" #include "base/hinic3_compat.h" #include "base/hinic3_hwdev.h" #include "base/hinic3_hwif.h" #include "base/hinic3_nic_cfg.h" #include "hinic3_ethdev.h" +#include "hinic3_nic_io.h" -#define HINIC3_UINT1_MAX 0x1 -#define HINIC3_UINT4_MAX 0xf -#define HINIC3_UINT15_MAX 0x7fff +#define HINIC3_INVALID_INDEX -1 #define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \ (&((struct hinic3_nic_dev *)(nic_dev))->tcam) @@ -77,6 +77,8 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src IPv4. */ tcam_key->key_mask.sipv4_h = @@ -99,15 +101,9 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip); } -static void -hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) +static void hinic3_fdir_ipv6_tcam_key_init_sip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) { - /* Fill type of ip. */ - tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; - tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; - - /* Fill src IPv6. */ tcam_key->key_mask_ipv6.sipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); tcam_key->key_mask_ipv6.sipv6_key1 = @@ -140,8 +136,11 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); tcam_key->key_info_ipv6.sipv6_key7 = HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); +} - /* Fill dst IPv6. */ +static void hinic3_fdir_ipv6_tcam_key_init_dip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ tcam_key->key_mask_ipv6.dipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); tcam_key->key_mask_ipv6.dipv6_key1 = @@ -176,6 +175,26 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); } +static void hinic3_fdir_ipv6_tcam_key_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + hinic3_fdir_ipv6_tcam_key_init_sip(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init_dip(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + /* Fill type of ip. */ + tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); +} + /** * Set the TCAM information in notunnel scenario. 
* @@ -204,6 +223,10 @@ hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev, tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + + tcam_key->key_mask.vlan_flag = 1; + tcam_key->key_info.vlan_flag = 0; + tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -223,6 +246,8 @@ hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src ipv4. */ tcam_key->key_mask.sipv4_h = @@ -252,6 +277,8 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_vxlan_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_vxlan_ipv6.vlan_flag = 0; /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. */ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 = @@ -288,77 +315,6 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]); } -static void -hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) -{ - tcam_key->key_mask_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_mask_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - - tcam_key->key_mask_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key5 = - 
HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_mask_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); -} - static void hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, struct hinic3_fdir_filter *rule, @@ -370,11 +326,14 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto; tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info_ipv6.tunnel_type = rule->tunnel_type; tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; tcam_key->key_info_ipv6.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -386,7 +345,7 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.sport = rule->key_spec.src_port; if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY) - hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); } /** @@ -448,9 +407,11 @@ hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev, HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info.tunnel_type = rule->tunnel_type; + tcam_key->key_mask.vlan_flag = 1; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + tcam_key->key_info.vlan_flag = 0; tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -479,6 +440,259 @@ hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev, tcam_key_calculate(tcam_key, fdir_tcam_rule); } +static void +hinic3_fdir_tcam_key_set_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.dipv4_l = + 
HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_sip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_mask_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_dip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_mask_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + 
tcam_key->key_mask_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void hinic3_fdir_tcam_key_set_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_sip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_notunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.outer_ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6_htn.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.inner_ipv4, + &rule->key_spec.inner_ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_vxlan_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_vxlan_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.inner_ipv6, + &rule->key_spec.inner_ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_tunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = rule->tunnel_type; + + tcam_key->key_mask_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_mask_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_h = + 
HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); + + if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_outer_ipv4_info(rule, tcam_key); + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_inner_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_inner_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_info_htn_init(struct rte_eth_dev *dev, + struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key, + struct hinic3_tcam_cfg_rule *fdir_tcam_rule) +{ + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + + tcam_key->key_mask_htn.function_id_h = HINIC3_UINT5_MAX; + tcam_key->key_mask_htn.function_id_l = HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_l = + hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_h = + (hinic3_global_func_id(nic_dev->hwdev) >> HINIC3_UINT5_WIDTH) & HINIC3_UINT5_MAX; + + tcam_key->key_mask_htn.ip_proto = rule->key_mask.proto; + tcam_key->key_info_htn.ip_proto = rule->key_spec.proto; + + tcam_key->key_mask_htn.sport = rule->key_mask.src_port; + tcam_key->key_info_htn.sport = rule->key_spec.src_port; + + tcam_key->key_mask_htn.dport = rule->key_mask.dst_port; + tcam_key->key_info_htn.dport = rule->key_spec.dst_port; + if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL) + hinic3_fdir_tcam_notunnel_htn_init(rule, tcam_key); + else + hinic3_fdir_tcam_tunnel_htn_init(rule, tcam_key); + + fdir_tcam_rule->data.qid = rule->rq_index; + + tcam_key_calculate(tcam_key, fdir_tcam_rule); +} + /** * Find filter in given ethertype filter list. * @@ -513,19 +727,30 @@ hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_li * Point to the tcam filter list. * @param[in] key * The tcam key to find. + * @param[in] action_type + * The type of action. + * @param[in] tcam_index + * The index of tcam. * @return * If a matching filter is found, the filter is returned, otherwise NULL. */ static inline struct hinic3_tcam_filter * hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list, - struct hinic3_tcam_key *key) + struct hinic3_tcam_key *key, + uint8_t action_type, uint16_t tcam_index) { struct hinic3_tcam_filter *it; - TAILQ_FOREACH(it, filter_list, entries) { - if (memcmp(key, &it->tcam_key, - sizeof(struct hinic3_tcam_key)) == 0) { - return it; + if (action_type == HINIC3_ACTION_ADD) { + TAILQ_FOREACH(it, filter_list, entries) { + if (memcmp(key, &it->tcam_key, sizeof(struct hinic3_tcam_key)) == 0) + return it; + } + } else { + TAILQ_FOREACH(it, filter_list, entries) { + if (tcam_index == + (it->index + HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(it->dynamic_block_id))) + return it; } } @@ -588,25 +813,18 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, * * @param[in] dev * Pointer to ethernet device structure. - * @param[in] fdir_tcam_rule - * Indicate the filtering rule to be searched for. * @param[in] tcam_info * Ternary Content-Addressable Memory (TCAM) information. - * @param[in] tcam_filter - * Point to the TCAM filter. * @param[out] tcam_index * Indicate the TCAM index to be searched for. * @result * Pointer to the TCAM dynamic block. If the search fails, NULL is returned. 
*/ static struct hinic3_tcam_dynamic_block * -hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, - struct hinic3_tcam_cfg_rule *fdir_tcam_rule, +hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev, struct hinic3_tcam_info *tcam_info, - struct hinic3_tcam_filter *tcam_filter, uint16_t *tcam_index) { - struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); uint16_t block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt; struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; struct hinic3_tcam_dynamic_block *tmp = NULL; @@ -616,6 +834,8 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, uint16_t index; int err; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + rule_nums += nic_dev->ethertype_rule_nums; /* * Check whether the number of filtering rules reaches the maximum * capacity of dynamic TCAM blocks. @@ -662,8 +882,7 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, if (tmp == NULL || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "Fdir filter dynamic lookup for index failed!"); + PMD_DRV_LOG(ERR, "Fdir filter dynamic lookup for index failed!"); goto look_up_failed; } @@ -674,20 +893,13 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, /* Find the first free position. */ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "tcam block 0x%x supports filter rules is full!", + PMD_DRV_LOG(ERR, "tcam block 0x%x supports filter rules is full!", tmp->dynamic_block_id); goto look_up_failed; } - tcam_filter->dynamic_block_id = tmp->dynamic_block_id; - tcam_filter->index = index; *tcam_index = index; - fdir_tcam_rule->index = - HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) + - index; - return tmp; look_up_failed: @@ -702,6 +914,107 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, return NULL; } +static void +hinic3_tcam_index_free(struct hinic3_nic_dev *nic_dev, uint16_t index, uint16_t block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + + TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list, entries) { + if (tmp->dynamic_block_id == block_id) + break; + } + + if (tmp == NULL || tmp->dynamic_block_id != block_id) { + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); + return; + } + + tmp->dynamic_index[index] = 0; + tmp->dynamic_index_cnt--; + if (tmp->dynamic_index_cnt == 0) { + hinic3_free_tcam_block(nic_dev->hwdev, &block_id); + hinic3_free_dynamic_block_resource(tcam_info, tmp); + } +} + +static uint16_t +hinic3_tcam_alloc_index(void *dev, uint16_t *block_id) +{ + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev; + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + uint16_t index = 0; + + tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev, tcam_info, &index); + if (tmp == NULL) { + PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); + return HINIC3_TCAM_INVALID_INDEX; + } + + tmp->dynamic_index[index] = 1; + tmp->dynamic_index_cnt++; + + *block_id = tmp->dynamic_block_id; + + return index; +} + +static int +hinic3_set_fdir_ethertype_filter(void *hwdev, uint8_t pkt_type, void *filter, uint8_t en) +{ + struct hinic3_set_fdir_ethertype_rule ethertype_cmd; + struct hinic3_ethertype_filter *ethertype_filter = (struct hinic3_ethertype_filter *)filter; + uint16_t out_size = sizeof(ethertype_cmd); + uint16_t 
block_id;
+ uint32_t index = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+ struct hinic3_nic_dev *nic_dev =
+ (struct hinic3_nic_dev *)((struct hinic3_hwdev *)hwdev)->dev_handle;
+
+ if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) {
+ if (en != 0) {
+ index = hinic3_tcam_alloc_index(nic_dev, &block_id);
+ if (index == HINIC3_TCAM_INVALID_INDEX)
+ return -ENOMEM;
+ index += HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id);
+ } else {
+ index = ethertype_filter->tcam_index[pkt_type];
+ }
+ }
+
+ memset(&ethertype_cmd, 0, sizeof(struct hinic3_set_fdir_ethertype_rule));
+ ethertype_cmd.func_id = hinic3_global_func_id(hwdev);
+ ethertype_cmd.pkt_type = pkt_type;
+ ethertype_cmd.pkt_type_en = en;
+ ethertype_cmd.index = index;
+ ethertype_cmd.qid = (uint8_t)ethertype_filter->queue;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_SET_FDIR_STATUS,
+ &ethertype_cmd, sizeof(ethertype_cmd),
+ &ethertype_cmd, &out_size);
+ if (err || ethertype_cmd.head.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d",
+ err, ethertype_cmd.head.status, out_size, ethertype_cmd.func_id);
+ return -EIO;
+ }
+ if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) {
+ if (en == 0) {
+ hinic3_tcam_index_free(nic_dev, HINIC3_TCAM_GET_INDEX_IN_BLOCK(index),
+ HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index));
+ } else {
+ ethertype_filter->tcam_index[pkt_type] = index;
+ }
+ }
+
+ return 0;
+}
+
 /**
  * Add a TCAM filter.
  *
@@ -722,11 +1035,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev,
 struct hinic3_tcam_info *tcam_info =
 HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
 struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
- struct hinic3_tcam_dynamic_block *tmp = NULL;
 struct hinic3_tcam_filter *tcam_filter;
- uint16_t tcam_block_index = 0;
- uint16_t index = 0;
 int err;

 /* Alloc TCAM filter memory. */
@@ -737,39 +1046,14 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev,
 tcam_filter->tcam_key = *tcam_key;
 tcam_filter->queue = (uint16_t)(fdir_tcam_rule->data.qid);
-
- /* Add new TCAM rules. */
- if (nic_dev->tcam_rule_nums == 0) {
- err = hinic3_alloc_tcam_block(nic_dev->hwdev, &tcam_block_index);
- if (err) {
- PMD_DRV_LOG(ERR,
- "Fdir filter tcam alloc block failed!");
- goto failed;
- }
-
- dynamic_block_ptr =
- hinic3_alloc_dynamic_block_resource(tcam_info,
- tcam_block_index);
- if (dynamic_block_ptr == NULL) {
- PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first block memory failed!");
- goto alloc_block_failed;
- }
- }
-
- /*
- * Look for an available index in the dynamic block to store the new
- * TCAM filter.
- */
- tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info,
- tcam_filter, &index);
- if (tmp == NULL) {
- PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!");
- goto lookup_tcam_index_failed;
- }
+ tcam_filter->index = hinic3_tcam_alloc_index(nic_dev, &tcam_filter->dynamic_block_id);
+ if (tcam_filter->index == HINIC3_TCAM_INVALID_INDEX)
+ goto failed;
+ fdir_tcam_rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tcam_filter->dynamic_block_id) +
+ tcam_filter->index;
 /* Add a new TCAM rule to the network device. 
*/ - err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, - TCAM_RULE_FDIR_TYPE); + err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, TCAM_RULE_FDIR_TYPE); if (err) { PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!"); goto add_tcam_rules_failed; @@ -785,10 +1069,6 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, /* Add a filter to the end of the queue. */ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries); - /* Update dynamic index. */ - tmp->dynamic_index[index] = 1; - tmp->dynamic_index_cnt++; - nic_dev->tcam_rule_nums++; PMD_DRV_LOG(INFO, @@ -796,7 +1076,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, hinic3_global_func_id(nic_dev->hwdev)); PMD_DRV_LOG(INFO, "tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d", - tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index, + tcam_filter->dynamic_block_id, tcam_filter->index, fdir_tcam_rule->index, fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums); return 0; @@ -806,14 +1086,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, TCAM_RULE_FDIR_TYPE); add_tcam_rules_failed: -lookup_tcam_index_failed: - if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL) - hinic3_free_dynamic_block_resource(tcam_info, - dynamic_block_ptr); - -alloc_block_failed: - if (nic_dev->tcam_rule_nums == 0) - hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tcam_filter->dynamic_block_id); failed: rte_free(tcam_filter); @@ -850,8 +1123,7 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, } if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) { - PMD_DRV_LOG(ERR, - "Fdir filter del dynamic lookup for block failed!"); + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); return -EINVAL; } /* Calculate TCAM index. */ @@ -873,14 +1145,9 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, dynamic_block_id, tcam_filter->index, index, tmp->dynamic_index_cnt - 1, nic_dev->tcam_rule_nums - 1); - tmp->dynamic_index[tcam_filter->index] = 0; - tmp->dynamic_index_cnt--; - nic_dev->tcam_rule_nums--; - if (tmp->dynamic_index_cnt == 0) { - hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tmp->dynamic_block_id); - hinic3_free_dynamic_block_resource(tcam_info, tmp); - } + nic_dev->tcam_rule_nums--; /* If the number of rules is 0, the TCAM filter is disabled. */ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) @@ -930,6 +1197,7 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_tcam_filter *tcam_filter; @@ -940,11 +1208,15 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule)); memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key)); - hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, - &fdir_tcam_rule); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + else + hinic3_fdir_tcam_info_htn_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + /* Search for a filter. 
*/ tcam_filter = - hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key); + hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_ADD, HINIC3_INVALID_INDEX); if (tcam_filter != NULL && add) { PMD_DRV_LOG(ERR, "Filter exists."); return -EEXIST; @@ -965,6 +1237,13 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, fdir_filter->tcam_index = (int)(fdir_tcam_rule.index); } else { + tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_NOT_ADD, + fdir_filter->tcam_index); + if (tcam_filter == NULL) { + PMD_DRV_LOG(ERR, "Filter doesn't exist."); + return -ENOENT; + } PMD_DRV_LOG(INFO, "begin to del tcam filter"); ret = hinic3_del_tcam_filter(dev, tcam_filter); if (ret) @@ -1088,7 +1367,7 @@ hinic3_free_fdir_filter(struct rte_eth_dev *dev) static int hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1097,7 +1376,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1107,7 +1386,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Request Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1117,7 +1396,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Response Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1129,19 +1408,19 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, set_arp_rep_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, !add); + ethertype_filter, !add); set_arp_req_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1150,7 +1429,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the LACP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1160,7 +1439,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the OAM Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_OAM, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d", add ? 
"Add" : "Del", ret); @@ -1172,14 +1451,14 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, set_arp_oam_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1188,7 +1467,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the LLDP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1198,7 +1477,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the CDCP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_CDCP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1210,14 +1489,14 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, set_arp_cdcp_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1245,7 +1524,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, return hinic3_flow_set_arp_filter(dev, ethertype_filter, add); case RTE_ETHER_TYPE_RARP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_RARP, ethertype_filter, add); case RTE_ETHER_TYPE_SLOW: return hinic3_flow_set_slow_filter(dev, ethertype_filter, add); @@ -1255,11 +1534,11 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, case RTE_ETHER_TYPE_CNM: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_CNM, ethertype_filter, add); case RTE_ETHER_TYPE_ECP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_ECP, ethertype_filter, add); default: PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d", @@ -1270,7 +1549,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, } static int -hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter) +hinic3_flow_ethertype_rule_nums(struct hinic3_ethertype_filter *ethertype_filter) { switch (ethertype_filter->ether_type) { case RTE_ETHER_TYPE_ARP: @@ -1309,7 +1588,7 @@ hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filte */ int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { /* Get dev private info. 
*/ diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h index 8659f588d9..277d89d4fd 100644 --- a/drivers/net/hinic3/hinic3_fdir.h +++ b/drivers/net/hinic3/hinic3_fdir.h @@ -14,6 +14,30 @@ #define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index)) +#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \ + ((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \ + ((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_INVALID_INDEX 0xFFFF + +enum hinic3_ether_type { + HINIC3_PKT_TYPE_ARP = 1, + HINIC3_PKT_TYPE_ARP_REQ, + HINIC3_PKT_TYPE_ARP_REP, + HINIC3_PKT_TYPE_RARP, + HINIC3_PKT_TYPE_LACP, + HINIC3_PKT_TYPE_LLDP, + HINIC3_PKT_TYPE_OAM, + HINIC3_PKT_TYPE_CDCP, + HINIC3_PKT_TYPE_CNM, + HINIC3_PKT_TYPE_ECP = 10, + HINIC3_PKT_TYPE_BUTT, + + HINIC3_PKT_UNKNOWN = 31, +}; + /* Indicate a traffic filtering rule. */ struct rte_flow { TAILQ_ENTRY(rte_flow) node; @@ -30,6 +54,8 @@ struct hinic3_fdir_rule_key { uint16_t src_port; uint16_t dst_port; uint8_t proto; + uint8_t vlan_flag; + uint16_t ether_type; }; struct hinic3_fdir_filter { @@ -42,17 +68,34 @@ struct hinic3_fdir_filter { uint32_t rq_index; /**< Queue assigned when matched. */ }; +struct hinic3_ethertype_filter { + int tcam_index[HINIC3_PKT_TYPE_BUTT]; + uint16_t ether_type; /**< Ether type to match */ + uint16_t queue; /**< Queue assigned to when match*/ +}; + /* This structure is used to describe a basic filter type. */ struct hinic3_filter_t { uint16_t filter_rule_nums; enum rte_filter_type filter_type; - struct rte_eth_ethertype_filter ethertype_filter; + struct hinic3_ethertype_filter ethertype_filter; struct hinic3_fdir_filter fdir_filter; }; +enum hinic3_action_type { + HINIC3_ACTION_ADD, + HINIC3_ACTION_NOT_ADD, +}; + enum hinic3_fdir_tunnel_mode { HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0, - HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_NVGRE = 2, + HINIC3_FDIR_TUNNEL_MODE_FC = 3, + HINIC3_FDIR_TUNNEL_MODE_GPE = 4, + HINIC3_FDIR_TUNNEL_MODE_GENEVE = 5, + HINIC3_FDIR_TUNNEL_MODE_NSH = 6, + HINIC3_FDIR_TUNNEL_MODE_IPIP = 7, }; enum hinic3_fdir_ip_type { @@ -61,7 +104,6 @@ enum hinic3_fdir_ip_type { HINIC3_FDIR_IP_TYPE_ANY = 2, }; -/* Describe the key structure of the TCAM. */ struct hinic3_tcam_key_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -77,11 +119,13 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t dipv4_l : 16; - uint32_t rsvd2 : 16; + + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t dport : 16; uint32_t sport : 16; @@ -89,9 +133,10 @@ struct hinic3_tcam_key_mem { uint32_t rsvd6 : 16; uint32_t outer_sipv4_h : 16; - uint32_t outer_sipv4_l : 16; + uint32_t outer_sipv4_l : 16; uint32_t outer_dipv4_h : 16; + uint32_t outer_dipv4_l : 16; uint32_t vni_h : 16; @@ -110,13 +155,14 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t sipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t dipv4_l : 16; uint32_t rsvd3; uint32_t dport : 16; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t rsvd5 : 16; uint32_t sport : 16; @@ -135,18 +181,90 @@ struct hinic3_tcam_key_mem { #endif }; -/* - * Define the IPv6-related TCAM key data structure in common - * scenarios or IPv6 tunnel scenarios. 
- */ +struct hinic3_tcam_key_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h: 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t sipv4_h : 16; + + uint32_t sipv4_l : 16; + uint32_t rsvd5 : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t rsvd8 : 16; + uint32_t dipv4_h : 16; + + uint32_t dipv4_l : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd5 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t sipv4_h : 16; + uint32_t vni_l : 16; + + uint32_t rsvd5 : 16; + uint32_t sipv4_l : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t dipv4_h : 16; + uint32_t rsvd8 : 16; + + uint32_t sport : 16; + uint32_t dipv4_l :16; + + uint32_t rsvd9 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; - /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */ uint32_t ip_proto : 8; uint32_t tunnel_type : 4; uint32_t outer_ip_type : 1; - uint32_t rsvd1 : 3; + uint32_t vlan_flag : 1; + uint32_t rsvd1 : 2; uint32_t function_id : 15; uint32_t ip_type : 1; @@ -179,7 +297,9 @@ struct hinic3_tcam_key_ipv6_mem { uint32_t dipv6_key7 : 16; uint32_t rsvd2 : 16; #else - uint32_t rsvd1 : 3; + uint32_t rsvd1 : 2; + uint32_t vlan_flag : 1; + uint32_t outer_ip_type : 1; uint32_t tunnel_type : 4; uint32_t ip_proto : 8; @@ -218,10 +338,86 @@ struct hinic3_tcam_key_ipv6_mem { #endif }; -/* - * Define the tcam key value data structure related to IPv6 in - * the VXLAN scenario. 
- */ +struct hinic3_tcam_key_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t sipv6_key0 : 16; + + uint32_t sipv6_key1 : 16; + uint32_t sipv6_key2 : 16; + + uint32_t sipv6_key3 : 16; + uint32_t sipv6_key4 : 16; + + uint32_t sipv6_key5 : 16; + uint32_t sipv6_key6 : 16; + + uint32_t sipv6_key7 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t sipv6_key0 : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t sipv6_key2 : 16; + uint32_t sipv6_key1 : 16; + + uint32_t sipv6_key4 : 16; + uint32_t sipv6_key3 : 16; + + uint32_t sipv6_key6 : 16; + uint32_t sipv6_key5 : 16; + + uint32_t dipv6_key0 : 16; + uint32_t sipv6_key7 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd2 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_vxlan_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -246,7 +442,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t sport : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3 : 16; uint32_t outer_sipv4_h : 16; @@ -281,7 +478,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t dipv6_key7 : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t sport : 16; uint32_t outer_sipv4_h : 16; @@ -298,6 +496,88 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { #endif }; +struct hinic3_tcam_key_vxlan_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t rsvd3 : 16; + + uint32_t rsvd4 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t rsvd3 : 16; + uint32_t vni_l : 16; + + uint32_t dipv6_key0 : 16; + 
uint32_t rsvd4 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd5 : 16; + uint32_t dport : 16; +#endif +}; + /* * TCAM key structure. The two unions indicate the key and mask respectively. * The TCAM key is consistent with the TCAM entry. @@ -307,18 +587,26 @@ struct hinic3_tcam_key { struct hinic3_tcam_key_mem key_info; struct hinic3_tcam_key_ipv6_mem key_info_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_info_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_info_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_info_vxlan_ipv6_htn; }; union { struct hinic3_tcam_key_mem key_mask; struct hinic3_tcam_key_ipv6_mem key_mask_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_mask_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_mask_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_mask_vxlan_ipv6_htn; }; }; /* Structure indicates the TCAM filter. */ struct hinic3_tcam_filter { - TAILQ_ENTRY(hinic3_tcam_filter) - entries; /**< Filter entry, used for linked list operations. */ + /** Filter entry, used for linked list operations. */ + TAILQ_ENTRY(hinic3_tcam_filter) entries; uint16_t dynamic_block_id; /**< Dynamic block ID. */ uint16_t index; /**< TCAM index. */ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */ @@ -362,33 +650,24 @@ struct hinic3_tcam_info { #define HINIC3_CNM_RULE_NUM 1 #define HINIC3_ECP_RULE_NUM 2 +#define HINIC3_UINT1_MAX 0x1 +#define HINIC3_UINT2_MAX 0x3 +#define HINIC3_UINT3_MAX 0x7 +#define HINIC3_UINT4_MAX 0xf +#define HINIC3_UINT5_WIDTH 0x5 +#define HINIC3_UINT5_MAX 0x1f +#define HINIC3_UINT15_MAX 0x7fff + /* Define Ethernet type. */ #define RTE_ETHER_TYPE_CNM 0x22e7 #define RTE_ETHER_TYPE_ECP 0x8940 -/* Protocol type of the data packet. */ -enum hinic3_ether_type { - HINIC3_PKT_TYPE_ARP = 1, - HINIC3_PKT_TYPE_ARP_REQ, - HINIC3_PKT_TYPE_ARP_REP, - HINIC3_PKT_TYPE_RARP, - HINIC3_PKT_TYPE_LACP, - HINIC3_PKT_TYPE_LLDP, - HINIC3_PKT_TYPE_OAM, - HINIC3_PKT_TYPE_CDCP, - HINIC3_PKT_TYPE_CNM, - HINIC3_PKT_TYPE_ECP = 10, - - HINIC3_PKT_UNKNOWN = 31, -}; - int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add); int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add); - void hinic3_free_fdir_filter(struct rte_eth_dev *dev); int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, uint32_t queue_id, uint32_t able); diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 9e2c80f759..ed753ba4ec 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -22,8 +22,7 @@ * Current pi. */ static inline void -hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, - uint16_t *pi) +hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, uint16_t *pi) { *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx); @@ -84,8 +83,7 @@ hinic3_rx_fill_wqe(struct hinic3_rxq *rxq) if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { /* Unit of cqe length is 16B. 
*/ - hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, - cqe_dma, + hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, cqe_dma, HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT); /* Use fixed len. */ rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; @@ -436,12 +434,18 @@ hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev, rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.tcp_ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); return err; } @@ -488,8 +492,7 @@ hinic3_update_rss_config(struct rte_eth_dev *dev, goto init_rss_fail; } - err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, - prio_tc); + err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, prio_tc); if (err) { PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err); goto init_rss_fail; @@ -797,7 +800,7 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) } } - hinic3_rearm_rxq_mbuf(rxq); + (void)hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); if (err) @@ -813,7 +816,6 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return err; } - static inline uint64_t hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { @@ -1018,8 +1020,8 @@ hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hini uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); - cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); - cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index e0ff095c04..6b2bffb14e 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -393,7 +393,7 @@ static int hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_wqe_info *wqe_info) + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; struct hinic3_offload_info *offload_info = &wqe_info->offload_info; @@ -409,7 +409,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, /* Tso offload. 
*/ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + wqe_info->queue_info.payload_offset = wqe_info->payload_offset >> 1; if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; @@ -457,7 +457,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, offload_info->out_l4_en = 1; set_tx_wqe_offload: - nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); + nic_dev->tx_ops->nic_tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -627,9 +627,8 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, return err; /* Non-tso mbuf only check sge num. */ - if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { + if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); - } /* Tso mbuf. */ wqe_info->payload_offset = @@ -647,8 +646,7 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, } static inline void -hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, - uint32_t len) +hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, uint32_t len) { buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); @@ -832,14 +830,14 @@ hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT); *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | - SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset, PLDOFF) | SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 3dbbd53174..73f4922734 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -55,7 +55,8 @@ struct hinic3_htn_vlan_ctx { /** * Get cmdq ops hardware tile NIC(htn) supported. * - * @retval Pointer to ops. + * @return + * Pointer to ops. 
*/ struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build index b79b753716..b286cdb79c 100644 --- a/drivers/net/hinic3/meson.build +++ b/drivers/net/hinic3/meson.build @@ -16,8 +16,6 @@ endif cflags += ['-DHW_CONVERT_ENDIAN'] -subdir('base') - sources = files( 'hinic3_ethdev.c', 'hinic3_nic_io.c', @@ -28,3 +26,9 @@ sources = files( ) includes += include_directories('base') +includes += include_directories('stn_adapt') +includes += include_directories('stn_adapt') + +subdir('base') +subdir('htn_adapt') +subdir('stn_adapt') \ No newline at end of file diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index dfe8598f78..f41f060d17 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -94,7 +94,7 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } -static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev __rte_unused, const uint32_t *indir_table, struct hinic3_cmd_buf *cmd_buf) { -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
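The fdir rework in this patch addresses a global TCAM rule index as a dynamic block id plus an offset inside that block, using the HINIC3_PKT_TCAM_DYNAMIC_INDEX_START, HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX and HINIC3_TCAM_GET_INDEX_IN_BLOCK macros added to hinic3_fdir.h. The standalone sketch below shows that round trip; it is illustrative only and not part of the submitted patch, and the block size of 16 is an assumed value chosen purely for the example (the driver uses its own HINIC3_TCAM_DYNAMIC_BLOCK_SIZE constant).

#include <assert.h>
#include <stdint.h>

/* Assumed value for this example only. */
#define HINIC3_TCAM_DYNAMIC_BLOCK_SIZE 16

#define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \
	(HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index))
#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \
	((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)
#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \
	((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)

int
main(void)
{
	uint16_t block_id = 3;	/* dynamic block picked by hinic3_tcam_alloc_index() */
	uint16_t in_block = 5;	/* free slot inside that block */

	/* Compose the global rule index the way hinic3_add_tcam_filter() does. */
	uint16_t global = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id) + in_block;

	/* Decompose it again the way the ethertype delete path does. */
	assert(HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(global) == block_id);
	assert(HINIC3_TCAM_GET_INDEX_IN_BLOCK(global) == in_block);

	return 0;
}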
* Re: [PATCH v8 0/7] hinic3 change for support new SPx NIC 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (6 preceding siblings ...) 2026-03-24 1:55 ` [PATCH v8 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang @ 2026-03-24 3:27 ` Stephen Hemminger 2026-03-24 3:31 ` Stephen Hemminger ` (2 subsequent siblings) 10 siblings, 0 replies; 80+ messages in thread From: Stephen Hemminger @ 2026-03-24 3:27 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, Feifei Wang On Tue, 24 Mar 2026 09:55:03 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > From: Feifei Wang <wangfeifei40@huawei.com> > > Change hinic3 driver to support Huawei new SPx series NIC. > > v2: > --fix build issues > > v3: > --fix community review comments and err reports > > v4: > --fix rss type assignment error > > v5: > --fix community ubuntu-22.04-clang err > > v6: > --fix atomic compilation error > > v6: > --fix community review comments > > v7: > --fix htn/stn ops function name error > --update doc/guides for hinic3 driver > > v8: > --fix guides grammar issue > --fix rss_type.ipv6_ext = 0 error > > Feifei Wang (7): > net/hinic3: add support for new SPx series NIC > net/hinic3: add enhance cmdq support for new SPx series NIC > net/hinic3: use different callback func to split new/old cmdq > operations > net/hinic3: add fun init ops to support Compact CQE > net/hinic3: add rx ops to support Compact CQE > net/hinic3: add tx ops to support Compact CQE > net/hinic3: use different callback func to support htn fdir > > doc/guides/nics/features/hinic3.ini | 11 +- > doc/guides/nics/hinic3.rst | 5 +- > doc/guides/rel_notes/release_26_03.rst | 8 + > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ > drivers/net/hinic3/base/hinic3_csr.h | 18 +- > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- > drivers/net/hinic3/base/hinic3_hwdev.h | 18 + > drivers/net/hinic3/base/hinic3_hwif.c | 10 +- > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- > drivers/net/hinic3/base/hinic3_mgmt.h | 2 + > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- > drivers/net/hinic3/base/meson.build | 1 + > drivers/net/hinic3/hinic3_ethdev.c | 275 ++++++-- > drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- > drivers/net/hinic3/hinic3_rx.c | 267 +++++-- > drivers/net/hinic3/hinic3_rx.h | 182 ++++- > drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ > drivers/net/hinic3/hinic3_tx.h | 154 +++- > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ > drivers/net/hinic3/htn_adapt/meson.build | 7 + > drivers/net/hinic3/meson.build | 8 +- > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ > drivers/net/hinic3/stn_adapt/meson.build | 7 + > 36 files changed, 3379 insertions(+), 1445 deletions(-) > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h > 
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > The driver is clean but as a followup could you consider changing the code to only use rte_zmalloc() where needed. The regular glibc has more protections and is faster at alloc/free. The only reason to use rte_malloc is when hugepages are needed for sharing, DMA or access performance. For example: this function should just use calloc(). int hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; uint32_t *indir_tbl; int err; indir_tbl = rte_zmalloc(NULL, HINIC3_RSS_INDIR_SIZE * sizeof(uint32_t), 0); if (!indir_tbl) { PMD_DRV_LOG(ERR, "Alloc indir_tbl mem failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); return -ENOMEM; } /* Build indir tbl according to the number of rss queue. */ hinic3_fill_indir_tbl(nic_dev, indir_tbl); err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto out; } out: rte_free(indir_tbl); return err; } ^ permalink raw reply [flat|nested] 80+ messages in thread
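For reference, a minimal sketch of what the suggested change could look like for the function quoted above, with the hugepage allocation replaced by plain calloc()/free(). This is illustrative only and is not a patch from this thread; it reuses HINIC3_RSS_INDIR_SIZE and the helper calls exactly as they appear in the quoted code and additionally assumes <stdlib.h> is included.

/*
 * Illustrative sketch: same logic as the quoted function, but the
 * short-lived, CPU-only indirection table comes from calloc() instead of
 * rte_zmalloc(), since it is never DMA'd or shared between processes.
 */
int hinic3_refill_indir_rqid(struct hinic3_rxq *rxq)
{
	struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
	uint32_t *indir_tbl;
	int err;

	indir_tbl = calloc(HINIC3_RSS_INDIR_SIZE, sizeof(*indir_tbl));
	if (indir_tbl == NULL) {
		PMD_DRV_LOG(ERR, "Alloc indir_tbl mem failed, eth_dev:%s, queue_idx:%d",
			    nic_dev->dev_name, rxq->q_id);
		return -ENOMEM;
	}

	/* Build indir tbl according to the number of rss queues. */
	hinic3_fill_indir_tbl(nic_dev, indir_tbl);

	err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl);
	if (err)
		PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d",
			    nic_dev->dev_name, rxq->q_id);

	free(indir_tbl);
	return err;
}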
* Re: [PATCH v8 0/7] hinic3 change for support new SPx NIC 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (7 preceding siblings ...) 2026-03-24 3:27 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Stephen Hemminger @ 2026-03-24 3:31 ` Stephen Hemminger 2026-03-24 14:41 ` Stephen Hemminger 2026-03-24 14:42 ` Stephen Hemminger 10 siblings, 0 replies; 80+ messages in thread From: Stephen Hemminger @ 2026-03-24 3:31 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, Feifei Wang On Tue, 24 Mar 2026 09:55:03 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > From: Feifei Wang <wangfeifei40@huawei.com> > > Change hinic3 driver to support Huawei new SPx series NIC. > > v2: > --fix build issues > > v3: > --fix community review comments and err reports > > v4: > --fix rss type assignment error > > v5: > --fix community ubuntu-22.04-clang err > > v6: > --fix atomic compilation error > > v6: > --fix community review comments > > v7: > --fix htn/stn ops function name error > --update doc/guides for hinic3 driver > > v8: > --fix guides grammar issue > --fix rss_type.ipv6_ext = 0 error > > Feifei Wang (7): > net/hinic3: add support for new SPx series NIC > net/hinic3: add enhance cmdq support for new SPx series NIC > net/hinic3: use different callback func to split new/old cmdq > operations > net/hinic3: add fun init ops to support Compact CQE > net/hinic3: add rx ops to support Compact CQE > net/hinic3: add tx ops to support Compact CQE > net/hinic3: use different callback func to support htn fdir > > doc/guides/nics/features/hinic3.ini | 11 +- > doc/guides/nics/hinic3.rst | 5 +- > doc/guides/rel_notes/release_26_03.rst | 8 + > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ > drivers/net/hinic3/base/hinic3_csr.h | 18 +- > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- > drivers/net/hinic3/base/hinic3_hwdev.h | 18 + > drivers/net/hinic3/base/hinic3_hwif.c | 10 +- > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- > drivers/net/hinic3/base/hinic3_mgmt.h | 2 + > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- > drivers/net/hinic3/base/meson.build | 1 + > drivers/net/hinic3/hinic3_ethdev.c | 275 ++++++-- > drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- > drivers/net/hinic3/hinic3_rx.c | 267 +++++-- > drivers/net/hinic3/hinic3_rx.h | 182 ++++- > drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ > drivers/net/hinic3/hinic3_tx.h | 154 +++- > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ > drivers/net/hinic3/htn_adapt/meson.build | 7 + > drivers/net/hinic3/meson.build | 8 +- > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ > drivers/net/hinic3/stn_adapt/meson.build | 7 + > 36 files changed, 3379 insertions(+), 1445 deletions(-) > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h > 
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > Here is list of places identified by AI that use rte_zmalloc but may not need to. Places where rte_zmalloc is used but not needed in hinic3 driver. rte_zmalloc allocates from hugepage memory. It should only be used when the memory will be accessed by DMA, shared between primary and secondary DPDK processes, or requires specific NUMA node placement. For ordinary control-plane data structures, standard calloc/free is faster and does not consume limited hugepage resources. The following allocations are pure software bookkeeping and do not require hugepage memory: Temporary buffers (allocated and freed in the same function): hinic3_rx.c: hinic3_refill_indir_rqid() - indir_tbl array hinic3_nic_cfg.c: hinic3_get_phy_port_stats() - port_stats struct Command queue bookkeeping: hinic3_cmdq.c: hinic3_alloc_cmd_buf() - cmd_buf wrapper struct hinic3_cmdq.c: init_cmdq() - cmdq->errcode array hinic3_cmdq.c: init_cmdq() - cmdq->cmd_infos array hinic3_cmdq.c: hinic3_init_cmdqs() - cmdqs struct hinic3_cmdq.c: hinic3_init_cmdqs() - cmdqs->saved_wqs array Event queue tracking arrays (track DMA pages but are not DMA'd): hinic3_eqs.c: alloc_eq_pages() - eq->dma_addr array hinic3_eqs.c: alloc_eq_pages() - eq->virt_addr array hinic3_eqs.c: alloc_eq_pages() - eq->eq_mz array hinic3_eqs.c: hinic3_aeqs_init() - aeqs struct Mailbox and management channel buffers: hinic3_mbox.c: init_mbox_info() - mbox_info->mbox buffer hinic3_mbox.c: init_mbox_info() - mbox_info->buf_out buffer hinic3_mbox.c: hinic3_func_to_func_init() - func_to_func struct hinic3_mgmt.c: alloc_recv_msg() - recv_msg->msg buffer (x2) hinic3_mgmt.c: alloc_msg_buf() - mgmt_ack_buf buffer hinic3_mgmt.c: hinic3_pf_to_mgmt_init() - pf_to_mgmt struct Configuration and control structs: hinic3_hw_cfg.c: init_cfg_mgmt() - cfg_mgmt struct hinic3_hwdev.c: hinic3_init_comm_ch() - chip_fault_stats buffer hinic3_hwif.c: hinic3_hwif_res_init() - hwif struct hinic3_ethdev.c: hinic3_init_sw_rxtxqs() - txqs pointer array hinic3_ethdev.c: hinic3_init_sw_rxtxqs() - rxqs pointer array hinic3_ethdev.c: hinic3_func_init() - hwdev struct hinic3_ethdev.c: hinic3_func_init() - mc_list MAC address array hinic3_ethdev.c: hinic3_enable_interrupt() - intr_vec array Flow director and flow API: hinic3_fdir.c: hinic3_alloc_dynamic_block_resource() - tcam block hinic3_fdir.c: hinic3_add_tcam_filter() - tcam_filter struct hinic3_flow.c: hinic3_flow_create() - filter_rules struct hinic3_flow.c: hinic3_flow_create() - flow struct New allocations added by this patch series: hinic3_ethdev.c: hinic3_func_init() - cmdq_ops table hinic3_ethdev.c: hinic3_func_init() - rx_ops table hinic3_ethdev.c: hinic3_func_init() - tx_ops table Note: eth_dev->data->mac_addrs must remain rte_zmalloc because the ethdev framework frees it with rte_free. The rte_zmalloc_socket calls for rxq, txq, rx_info, and tx_info are correct because those are hot-path queue structures that benefit from NUMA-local placement. ^ permalink raw reply [flat|nested] 80+ messages in thread
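To make the distinction in the note above concrete, here is a small illustrative sketch (a hypothetical helper, not code from the driver or from this thread): queue structures touched on the hot path keep NUMA-local hugepage memory via rte_zmalloc_socket(), while the plain pointer array that merely tracks them is ordinary control-plane bookkeeping and can come from calloc().

#include <stdint.h>
#include <stdlib.h>

#include <rte_common.h>
#include <rte_malloc.h>

/* Stand-in for a hot-path queue structure; fields are placeholders. */
struct example_rxq {
	uint16_t q_id;
	uint16_t q_depth;
};

static struct example_rxq **
example_alloc_queues(uint16_t num_qps, int socket_id)
{
	/* CPU-only bookkeeping: no DMA, no multi-process sharing. */
	struct example_rxq **rxqs = calloc(num_qps, sizeof(*rxqs));
	if (rxqs == NULL)
		return NULL;

	for (uint16_t i = 0; i < num_qps; i++) {
		/* Hot-path structure: NUMA-local hugepage memory still fits here. */
		rxqs[i] = rte_zmalloc_socket("example_rxq", sizeof(*rxqs[i]),
					     RTE_CACHE_LINE_SIZE, socket_id);
		if (rxqs[i] == NULL) {
			while (i-- > 0)
				rte_free(rxqs[i]);
			free(rxqs);
			return NULL;
		}
		rxqs[i]->q_id = i;
	}

	return rxqs;
}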
* Re: [PATCH v8 0/7] hinic3 change for support new SPx NIC 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (8 preceding siblings ...) 2026-03-24 3:31 ` Stephen Hemminger @ 2026-03-24 14:41 ` Stephen Hemminger 2026-03-24 14:42 ` Stephen Hemminger 10 siblings, 0 replies; 80+ messages in thread From: Stephen Hemminger @ 2026-03-24 14:41 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, Feifei Wang On Tue, 24 Mar 2026 09:55:03 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > From: Feifei Wang <wangfeifei40@huawei.com> > > Change hinic3 driver to support Huawei new SPx series NIC. > > v2: > --fix build issues > > v3: > --fix community review comments and err reports > > v4: > --fix rss type assignment error > > v5: > --fix community ubuntu-22.04-clang err > > v6: > --fix atomic compilation error > > v6: > --fix community review comments > > v7: > --fix htn/stn ops function name error > --update doc/guides for hinic3 driver > > v8: > --fix guides grammar issue > --fix rss_type.ipv6_ext = 0 error > > Feifei Wang (7): > net/hinic3: add support for new SPx series NIC > net/hinic3: add enhance cmdq support for new SPx series NIC > net/hinic3: use different callback func to split new/old cmdq > operations > net/hinic3: add fun init ops to support Compact CQE > net/hinic3: add rx ops to support Compact CQE > net/hinic3: add tx ops to support Compact CQE > net/hinic3: use different callback func to support htn fdir > > doc/guides/nics/features/hinic3.ini | 11 +- > doc/guides/nics/hinic3.rst | 5 +- > doc/guides/rel_notes/release_26_03.rst | 8 + > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ > drivers/net/hinic3/base/hinic3_csr.h | 18 +- > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- > drivers/net/hinic3/base/hinic3_hwdev.h | 18 + > drivers/net/hinic3/base/hinic3_hwif.c | 10 +- > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- > drivers/net/hinic3/base/hinic3_mgmt.h | 2 + > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- > drivers/net/hinic3/base/meson.build | 1 + > drivers/net/hinic3/hinic3_ethdev.c | 275 ++++++-- > drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- > drivers/net/hinic3/hinic3_rx.c | 267 +++++-- > drivers/net/hinic3/hinic3_rx.h | 182 ++++- > drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ > drivers/net/hinic3/hinic3_tx.h | 154 +++- > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ > drivers/net/hinic3/htn_adapt/meson.build | 7 + > drivers/net/hinic3/meson.build | 8 +- > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ > drivers/net/hinic3/stn_adapt/meson.build | 7 + > 36 files changed, 3379 insertions(+), 1445 deletions(-) > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c > create mode 100644 
drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > Applied to next-net ^ permalink raw reply [flat|nested] 80+ messages in thread
* Re: [PATCH v8 0/7] hinic3 change for support new SPx NIC 2026-03-24 1:55 ` [PATCH v8 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (9 preceding siblings ...) 2026-03-24 14:41 ` Stephen Hemminger @ 2026-03-24 14:42 ` Stephen Hemminger 2026-03-25 1:30 ` 回复: " wangfeifei (J) 2026-03-25 13:37 ` Thomas Monjalon 10 siblings, 2 replies; 80+ messages in thread From: Stephen Hemminger @ 2026-03-24 14:42 UTC (permalink / raw) To: Feifei Wang; +Cc: dev, Feifei Wang On Tue, 24 Mar 2026 09:55:03 +0800 Feifei Wang <wff_light@vip.163.com> wrote: > From: Feifei Wang <wangfeifei40@huawei.com> > > Change hinic3 driver to support Huawei new SPx series NIC. > > v2: > --fix build issues > > v3: > --fix community review comments and err reports > > v4: > --fix rss type assignment error > > v5: > --fix community ubuntu-22.04-clang err > > v6: > --fix atomic compilation error > > v6: > --fix community review comments > > v7: > --fix htn/stn ops function name error > --update doc/guides for hinic3 driver > > v8: > --fix guides grammar issue > --fix rss_type.ipv6_ext = 0 error > > Feifei Wang (7): > net/hinic3: add support for new SPx series NIC > net/hinic3: add enhance cmdq support for new SPx series NIC > net/hinic3: use different callback func to split new/old cmdq > operations > net/hinic3: add fun init ops to support Compact CQE > net/hinic3: add rx ops to support Compact CQE > net/hinic3: add tx ops to support Compact CQE > net/hinic3: use different callback func to support htn fdir > > doc/guides/nics/features/hinic3.ini | 11 +- > doc/guides/nics/hinic3.rst | 5 +- > doc/guides/rel_notes/release_26_03.rst | 8 + > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ > drivers/net/hinic3/base/hinic3_csr.h | 18 +- > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- > drivers/net/hinic3/base/hinic3_hwdev.h | 18 + > drivers/net/hinic3/base/hinic3_hwif.c | 10 +- > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- > drivers/net/hinic3/base/hinic3_mgmt.h | 2 + > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- > drivers/net/hinic3/base/meson.build | 1 + > drivers/net/hinic3/hinic3_ethdev.c | 275 ++++++-- > drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- > drivers/net/hinic3/hinic3_rx.c | 267 +++++-- > drivers/net/hinic3/hinic3_rx.h | 182 ++++- > drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ > drivers/net/hinic3/hinic3_tx.h | 154 +++- > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ > drivers/net/hinic3/htn_adapt/meson.build | 7 + > drivers/net/hinic3/meson.build | 8 +- > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ > drivers/net/hinic3/stn_adapt/meson.build | 7 + > 36 files changed, 3379 insertions(+), 1445 deletions(-) > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h > create mode 100644 
drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > When merged did small changes to Subject to resolve check-git-log warnings. ^ permalink raw reply [flat|nested] 80+ messages in thread
* 回复: [PATCH v8 0/7] hinic3 change for support new SPx NIC 2026-03-24 14:42 ` Stephen Hemminger @ 2026-03-25 1:30 ` wangfeifei (J) 2026-03-25 13:37 ` Thomas Monjalon 1 sibling, 0 replies; 80+ messages in thread From: wangfeifei (J) @ 2026-03-25 1:30 UTC (permalink / raw) To: Stephen Hemminger, Feifei Wang Cc: dev@dpdk.org, zengweiliang zengweiliang, chenyi (CY) > -----邮件原件----- > 发件人: Stephen Hemminger <stephen@networkplumber.org> > 发送时间: 2026年3月24日 22:43 > 收件人: Feifei Wang <wff_light@vip.163.com> > 抄送: dev@dpdk.org; wangfeifei (J) <wangfeifei40@huawei.com> > 主题: Re: [PATCH v8 0/7] hinic3 change for support new SPx NIC > > On Tue, 24 Mar 2026 09:55:03 +0800 > Feifei Wang <wff_light@vip.163.com> wrote: > > > From: Feifei Wang <wangfeifei40@huawei.com> > > > > Change hinic3 driver to support Huawei new SPx series NIC. > > > > v2: > > --fix build issues > > > > v3: > > --fix community review comments and err reports > > > > v4: > > --fix rss type assignment error > > > > v5: > > --fix community ubuntu-22.04-clang err > > > > v6: > > --fix atomic compilation error > > > > v6: > > --fix community review comments > > > > v7: > > --fix htn/stn ops function name error > > --update doc/guides for hinic3 driver > > > > v8: > > --fix guides grammar issue > > --fix rss_type.ipv6_ext = 0 error > > > > Feifei Wang (7): > > net/hinic3: add support for new SPx series NIC > > net/hinic3: add enhance cmdq support for new SPx series NIC > > net/hinic3: use different callback func to split new/old cmdq > > operations > > net/hinic3: add fun init ops to support Compact CQE > > net/hinic3: add rx ops to support Compact CQE > > net/hinic3: add tx ops to support Compact CQE > > net/hinic3: use different callback func to support htn fdir > > > > doc/guides/nics/features/hinic3.ini | 11 +- > > doc/guides/nics/hinic3.rst | 5 +- > > doc/guides/rel_notes/release_26_03.rst | 8 + > > drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- > > drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ > > drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- > > drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ > > drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ > > drivers/net/hinic3/base/hinic3_csr.h | 18 +- > > drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- > > drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- > > drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- > > drivers/net/hinic3/base/hinic3_hwdev.h | 18 + > > drivers/net/hinic3/base/hinic3_hwif.c | 10 +- > > drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- > > drivers/net/hinic3/base/hinic3_mgmt.h | 2 + > > drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- > > drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- > > drivers/net/hinic3/base/meson.build | 1 + > > drivers/net/hinic3/hinic3_ethdev.c | 275 ++++++-- > > drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- > > drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- > > drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- > > drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- > > drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- > > drivers/net/hinic3/hinic3_rx.c | 267 +++++-- > > drivers/net/hinic3/hinic3_rx.h | 182 ++++- > > drivers/net/hinic3/hinic3_tx.c | 456 ++++++------ > > drivers/net/hinic3/hinic3_tx.h | 154 +++- > > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ > > .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ > > drivers/net/hinic3/htn_adapt/meson.build | 7 + > > drivers/net/hinic3/meson.build | 8 +- > > .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ > > 
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ > > drivers/net/hinic3/stn_adapt/meson.build | 7 + > > 36 files changed, 3379 insertions(+), 1445 deletions(-) create mode > > 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c > > create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h > > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c > > create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h > > create mode 100644 drivers/net/hinic3/htn_adapt/meson.build > > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c > > create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h > > create mode 100644 drivers/net/hinic3/stn_adapt/meson.build > > > > When merged did small changes to Subject to resolve check-git-log warnings. [Feifei] Thanks for your reviewing. rte_zmalloc clean we are doing, after this patch applied, we will upload rte_zmalloc clean patch. ^ permalink raw reply [flat|nested] 80+ messages in thread
* Re: [PATCH v8 0/7] hinic3 change for support new SPx NIC
2026-03-24 14:42 ` Stephen Hemminger
2026-03-25 1:30 ` 回复: " wangfeifei (J)
@ 2026-03-25 13:37 ` Thomas Monjalon
2026-03-25 14:02 ` Thomas Monjalon
1 sibling, 1 reply; 80+ messages in thread
From: Thomas Monjalon @ 2026-03-25 13:37 UTC (permalink / raw)
To: Feifei Wang, Stephen Hemminger; +Cc: Feifei Wang, dev

24/03/2026 15:42, Stephen Hemminger:
> On Tue, 24 Mar 2026 09:55:03 +0800
> Feifei Wang <wff_light@vip.163.com> wrote:
>
> > From: Feifei Wang <wangfeifei40@huawei.com>
> >
> > Change hinic3 driver to support Huawei new SPx series NIC.
>
> When merged did small changes to Subject to resolve check-git-log
> warnings.

Compilation is broken in the middle of this series.
I'm checking if that's something which can be repaired.

^ permalink raw reply [flat|nested] 80+ messages in thread
* Re: [PATCH v8 0/7] hinic3 change for support new SPx NIC
2026-03-25 13:37 ` Thomas Monjalon
@ 2026-03-25 14:02 ` Thomas Monjalon
0 siblings, 0 replies; 80+ messages in thread
From: Thomas Monjalon @ 2026-03-25 14:02 UTC (permalink / raw)
To: Feifei Wang, Stephen Hemminger, Feifei Wang; +Cc: dev, dev

25/03/2026 14:37, Thomas Monjalon:
> 24/03/2026 15:42, Stephen Hemminger:
> > On Tue, 24 Mar 2026 09:55:03 +0800
> > Feifei Wang <wff_light@vip.163.com> wrote:
> >
> > > From: Feifei Wang <wangfeifei40@huawei.com>
> > >
> > > Change hinic3 driver to support Huawei new SPx series NIC.
> >
> > When merged did small changes to Subject to resolve check-git-log
> > warnings.
>
> Compilation is broken in the middle of this series.
> I'm checking if that's something which can be repaired.

It really does not look ready: it cannot be built in the middle of the series.
We should not have accepted this series; it breaks git bisect.
I will let it pass because I'm late.

^ permalink raw reply [flat|nested] 80+ messages in thread
* [v3 0/7] hinic3 change for support new SPx NIC 2026-01-31 10:05 ` [PATCH 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang @ 2026-03-18 2:19 ` Feifei Wang 2026-03-18 2:19 ` [V3 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang ` (6 more replies) 2026-03-18 6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang 3 siblings, 7 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 2:19 UTC (permalink / raw) To: dev; +Cc: chenyi221 From: chenyi221 <chenyi221@huawei.com> Change hinic3 driver to support Huawei new SPx series NIC. v2: --fix build issues v3: --fix community review comments and err reports Feifei Wang (7): net/hinic3: add support for new SPx series NIC net/hinic3: add enhance cmdq support for new SPx series NIC net/hinic3: use different callback func to split new/old cmdq operations net/hinic3: add fun init ops to support Compact CQE net/hinic3: add rx ops to support Compact CQE net/hinic3: add tx ops to support Compact CQE net/hinic3: use different callback func to support htn fdir drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ drivers/net/hinic3/base/hinic3_csr.h | 18 +- drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_hwif.c | 10 +- drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- drivers/net/hinic3/base/meson.build | 1 + drivers/net/hinic3/hinic3_ethdev.c | 279 ++++++-- drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- drivers/net/hinic3/hinic3_rx.c | 265 +++++-- drivers/net/hinic3/hinic3_rx.h | 182 ++++- drivers/net/hinic3/hinic3_tx.c | 458 ++++++------ drivers/net/hinic3/hinic3_tx.h | 154 +++- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ drivers/net/hinic3/htn_adapt/meson.build | 7 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 33 files changed, 3361 insertions(+), 1443 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build -- 2.45.1.windows.1 ^ permalink raw reply [flat|nested] 80+ messages in thread
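Patches 3/7 and 7/7 of this series split the cmdq and fdir paths with per-device callback tables: the stn_adapt/ ops serve the older SPx NICs, whose commands target the software tile (stn), while the htn_adapt/ ops serve the new SPx NICs, whose commands target the hardware tile (htn). A minimal sketch of such an ops table is shown below; the member names are inferred from the nic_dev->cmdq_ops->... calls visible in patch 3/7, and the exact layout in hinic3_nic_io.h may differ.

#include <stdint.h>

struct hinic3_nic_dev;  /* opaque here; defined by the driver */
struct hinic3_cmd_buf;  /* opaque here; defined by the driver */

struct hinic3_cmdq_ops {
	/* Fill cmd_buf for "set RSS indirection table", return the cmdq cmd id. */
	uint8_t (*prepare_cmd_buf_set_rss_indir_table)(struct hinic3_nic_dev *nic_dev,
						       const uint32_t *indir_table,
						       struct hinic3_cmd_buf *cmd_buf);
	/* Fill cmd_buf for "get RSS indirection table", return the cmdq cmd id. */
	uint8_t (*prepare_cmd_buf_get_rss_indir_table)(struct hinic3_nic_dev *nic_dev,
						       struct hinic3_cmd_buf *cmd_buf);
	/* Decode the response buffer into the caller's indirection table. */
	void (*cmd_buf_to_rss_indir_table)(struct hinic3_cmd_buf *cmd_buf,
					   uint32_t *indir_table);
};

With such a split, hinic3_rss_get_indir_tbl() and hinic3_rss_set_indir_tbl() only drive the generic cmdq flow and leave the command id and buffer layout to whichever ops table was installed at init time for the detected NIC generation.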
* [V3 1/7] net/hinic3: add support for new SPx series NIC 2026-03-18 2:19 ` [v3 " Feifei Wang @ 2026-03-18 2:19 ` Feifei Wang 2026-03-18 2:19 ` [V3 2/7] net/hinic3: add enhance cmdq " Feifei Wang ` (5 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 2:19 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add new device id to suuport Huawei new SPx series Network Adapters. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_csr.h | 18 +++++++++--------- drivers/net/hinic3/base/hinic3_hwif.c | 10 +++++++--- drivers/net/hinic3/hinic3_ethdev.c | 14 +++++++------- 3 files changed, 23 insertions(+), 19 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h index 94b10601c4..eceb34e9fd 100644 --- a/drivers/net/hinic3/base/hinic3_csr.h +++ b/drivers/net/hinic3/base/hinic3_csr.h @@ -5,15 +5,15 @@ #ifndef _HINIC3_CSR_H_ #define _HINIC3_CSR_H_ -#ifdef CONFIG_SP_VID_DID -#define PCI_VENDOR_ID_SPNIC 0x1F3F -#define HINIC3_DEV_ID_STANDARD 0x9020 -#define HINIC3_DEV_ID_VF 0x9001 -#else -#define PCI_VENDOR_ID_HUAWEI 0x19e5 -#define HINIC3_DEV_ID_STANDARD 0x0222 -#define HINIC3_DEV_ID_VF 0x375F -#endif +#define PCI_VENDOR_ID_HUAWEI 0x19e5 + +#define HINIC3_DEV_ID_SP620 0x0222 +#define HINIC3_DEV_ID_VF_SP620 0x375F + +#define HINIC3_DEV_ID_SP230 0x0229 +#define HINIC3_DEV_ID_VF_SP230 0x3750 + +#define HINIC3_DEV_ID_SP920 0x0224 /* * Bit30/bit31 for bar index flag. diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c index 080254bf44..c82b223fa0 100644 --- a/drivers/net/hinic3/base/hinic3_hwif.c +++ b/drivers/net/hinic3/base/hinic3_hwif.c @@ -138,7 +138,11 @@ #define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MASK)) -#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF) +static inline bool hinic3_is_vf_dev(const struct rte_pci_device *pdev) +{ + return pdev->id.device_id == HINIC3_DEV_ID_VF_SP620 || + pdev->id.device_id == HINIC3_DEV_ID_VF_SP230; +} uint32_t hinic3_hwif_read_reg(struct hinic3_hwif *hwif, uint32_t reg) @@ -552,7 +556,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) void *db_base = NULL; int cfg_bar; - cfg_bar = HINIC3_IS_VF_DEV(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR + cfg_bar = hinic3_is_vf_dev(pci_dev) ? 
HINIC3_VF_PCI_CFG_REG_BAR : HINIC3_PF_PCI_CFG_REG_BAR; cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr; @@ -561,7 +565,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) "mem_resource addr is null, cfg_regs_base is NULL"); return -EFAULT; } - if (!HINIC3_IS_VF_DEV(pci_dev)) { + if (!hinic3_is_vf_dev(pci_dev)) { mgmt_reg_base = pci_dev->mem_resource[HINIC3_PCI_MGMT_REG_BAR].addr; if (mgmt_reg_base == NULL) { diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 0f72728a95..da2d6722d2 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -3521,13 +3521,13 @@ hinic3_dev_uninit(struct rte_eth_dev *dev) } static const struct rte_pci_id pci_id_hinic3_map[] = { -#ifdef CONFIG_SP_VID_DID - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)}, -#else - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)}, -#endif + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP620)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP620)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP230)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP920)}, {.vendor_id = 0}, }; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
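For reference, the device IDs introduced above map to the SPx parts as follows. The helper below is a standalone illustration rather than driver code; note that only the two VF IDs are treated as VF devices by hinic3_is_vf_dev(), and SP920 is listed as a PF only in this revision.

#include <stdint.h>
#include <stdio.h>

#define PCI_VENDOR_ID_HUAWEI   0x19e5
#define HINIC3_DEV_ID_SP620    0x0222
#define HINIC3_DEV_ID_VF_SP620 0x375F
#define HINIC3_DEV_ID_SP230    0x0229
#define HINIC3_DEV_ID_VF_SP230 0x3750
#define HINIC3_DEV_ID_SP920    0x0224

/* Illustrative only: map a PCI device id to a readable name. */
static const char *hinic3_devid_to_name(uint16_t device_id)
{
	switch (device_id) {
	case HINIC3_DEV_ID_SP620:    return "SP620 PF";
	case HINIC3_DEV_ID_VF_SP620: return "SP620 VF";
	case HINIC3_DEV_ID_SP230:    return "SP230 PF";
	case HINIC3_DEV_ID_VF_SP230: return "SP230 VF";
	case HINIC3_DEV_ID_SP920:    return "SP920 PF";
	default:                     return "not a hinic3 device";
	}
}

int main(void)
{
	printf("%04x:%04x -> %s\n", PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230,
	       hinic3_devid_to_name(HINIC3_DEV_ID_SP230));
	return 0;
}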
* [V3 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC 2026-03-18 2:19 ` [v3 " Feifei Wang 2026-03-18 2:19 ` [V3 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang @ 2026-03-18 2:19 ` Feifei Wang 2026-03-18 2:19 ` [V3 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang ` (4 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 2:19 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add enhance command queue for new SPx series NIC New SPx series NIC uses enhance command queue to send messages to hardware NIC, which is different from previous SPx NIC's common command queue.HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 to adapt to the two types of NICs. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_cmd.h | 80 ++-- drivers/net/hinic3/base/hinic3_cmdq.c | 370 +++++++----------- drivers/net/hinic3/base/hinic3_cmdq.h | 112 +++++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 ++++++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++++ drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 77 ++-- drivers/net/hinic3/base/meson.build | 1 + 13 files changed, 627 insertions(+), 333 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h index 6042ca51bd..f2d5d47522 100644 --- a/drivers/net/hinic3/base/hinic3_cmd.h +++ b/drivers/net/hinic3/base/hinic3_cmd.h @@ -23,14 +23,21 @@ #define HINIC3_RSS_TYPE_GET(val, member) \ (((uint32_t)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1) +#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) + /* NIC CMDQ MODE. */ enum hinic3_ucode_cmd { - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT = 1, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE = 4, - HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE = 5, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE = 6, - HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, + HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, + HINIC3_UCODE_CMD_ARM_SQ, + HINIC3_UCODE_CMD_ARM_RQ, + HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_SET_IQ_ENABLE, + HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_VLAN_CTX, }; /* Commands between NIC to MPU. 
*/ @@ -51,6 +58,12 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_RX_LRO = 13, HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE = 16, + + HINIC3_NIC_CMD_CACHE_OUT_QP_RES = 17, + HINIC3_NIC_CMD_SET_RQ_CI_CTX = 18, + HINIC3_NIC_CMD_SET_RQ_ENABLE = 19, + /* MAC & VLAN CFG */ HINIC3_NIC_CMD_GET_MAC = 20, HINIC3_NIC_CMD_SET_MAC = 21, @@ -59,6 +72,10 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + + HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN = 34, + HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN = 35, + /* RSS CFG */ HINIC3_NIC_CMD_RSS_CFG = 60, HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, @@ -89,6 +106,7 @@ enum hinic3_mgmt_cmd { HINIC3_MGMT_CMD_CFG_PAGESIZE = 22, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG = 23, HINIC3_MGMT_CMD_SET_DMA_ATTR = 25, + HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT = 26, HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40, HINIC3_MGMT_CMD_GET_FW_VERSION = 60, HINIC3_MGMT_CMD_GET_BOARD_INFO = 61, @@ -97,39 +115,39 @@ enum hinic3_mgmt_cmd { }; enum mag_cmd { - SERDES_CMD_PROCESS = 0, + SERDES_CMD_PROCESS = 0, - MAG_CMD_SET_PORT_CFG = 1, - MAG_CMD_SET_PORT_ADAPT = 2, - MAG_CMD_CFG_LOOPBACK_MODE = 3, + MAG_CMD_SET_PORT_CFG = 1, + MAG_CMD_SET_PORT_ADAPT = 2, + MAG_CMD_CFG_LOOPBACK_MODE = 3, - MAG_CMD_GET_PORT_ENABLE = 5, - MAG_CMD_SET_PORT_ENABLE = 6, - MAG_CMD_GET_LINK_STATUS = 7, - MAG_CMD_SET_LINK_FOLLOW = 8, - MAG_CMD_SET_PMA_ENABLE = 9, - MAG_CMD_CFG_FEC_MODE = 10, + MAG_CMD_GET_PORT_ENABLE = 5, + MAG_CMD_SET_PORT_ENABLE = 6, + MAG_CMD_GET_LINK_STATUS = 7, + MAG_CMD_SET_LINK_FOLLOW = 8, + MAG_CMD_SET_PMA_ENABLE = 9, + MAG_CMD_CFG_FEC_MODE = 10, /* PHY */ - MAG_CMD_GET_XSFP_INFO = 60, - MAG_CMD_SET_XSFP_ENABLE = 61, - MAG_CMD_GET_XSFP_PRESENT = 62, + MAG_CMD_GET_XSFP_INFO = 60, + MAG_CMD_SET_XSFP_ENABLE = 61, + MAG_CMD_GET_XSFP_PRESENT = 62, /* sfp/qsfp single byte read/write, for equipment test. 
*/ - MAG_CMD_SET_XSFP_RW = 63, - MAG_CMD_CFG_XSFP_TEMPERATURE = 64, + MAG_CMD_SET_XSFP_RW = 63, + MAG_CMD_CFG_XSFP_TEMPERATURE = 64, - MAG_CMD_WIRE_EVENT = 100, - MAG_CMD_LINK_ERR_EVENT = 101, + MAG_CMD_WIRE_EVENT = 100, + MAG_CMD_LINK_ERR_EVENT = 101, - MAG_CMD_EVENT_PORT_INFO = 150, - MAG_CMD_GET_PORT_STAT = 151, - MAG_CMD_CLR_PORT_STAT = 152, - MAG_CMD_GET_PORT_INFO = 153, - MAG_CMD_GET_PCS_ERR_CNT = 154, - MAG_CMD_GET_MAG_CNT = 155, - MAG_CMD_DUMP_ANTRAIN_INFO = 156, + MAG_CMD_EVENT_PORT_INFO = 150, + MAG_CMD_GET_PORT_STAT = 151, + MAG_CMD_CLR_PORT_STAT = 152, + MAG_CMD_GET_PORT_INFO = 153, + MAG_CMD_GET_PCS_ERR_CNT = 154, + MAG_CMD_GET_MAG_CNT = 155, + MAG_CMD_DUMP_ANTRAIN_INFO = 156, - MAG_CMD_MAX = 0xFF + MAG_CMD_MAX = 0xFF }; #endif /* _HINIC3_CMD_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c index e2b30ff94e..9c27c6f54c 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.c +++ b/drivers/net/hinic3/base/hinic3_cmdq.c @@ -5,6 +5,7 @@ #include "hinic3_compat.h" #include "hinic3_cmd.h" #include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" #include "hinic3_hwdev.h" #include "hinic3_hwif.h" #include "hinic3_mgmt.h" @@ -125,17 +126,17 @@ #define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi)) -#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) - #define FIRST_DATA_TO_WRITE_LAST sizeof(uint64_t) -#define WQE_LCMD_SIZE 64 -#define WQE_SCMD_SIZE 64 +#define WQE_LCMDQ_SIZE 64 +#define WQE_SCMDQ_SIZE 64 +#define WQE_ENHANCE_CMDQ_SIZE 32 #define COMPLETE_LEN 3 #define CMDQ_WQEBB_SIZE 64 #define CMDQ_WQEBB_SHIFT 6 +#define CMDQ_ENHANCE_WQEBB_SHIFT 4 #define CMDQ_WQE_SIZE 64 @@ -203,43 +204,6 @@ hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf) rte_free(cmd_buf); } -static uint32_t -cmdq_wqe_size(enum cmdq_wqe_type wqe_type) -{ - uint32_t wqe_size = 0; - - switch (wqe_type) { - case WQE_LCMD_TYPE: - wqe_size = WQE_LCMD_SIZE; - break; - case WQE_SCMD_TYPE: - wqe_size = WQE_SCMD_SIZE; - break; - } - - return wqe_size; -} - -static uint32_t -cmdq_get_wqe_size(enum bufdesc_len len) -{ - uint32_t wqe_size = 0; - - switch (len) { - case BUFDESC_LCMD_LEN: - wqe_size = WQE_LCMD_SIZE; - break; - case BUFDESC_SCMD_LEN: - wqe_size = WQE_SCMD_SIZE; - break; - default: - PMD_DRV_LOG(ERR, "Invalid bufdesc_len"); - break; - } - - return wqe_size; -} - static void cmdq_set_completion(struct hinic3_cmdq_completion *complete, struct hinic3_cmd_buf *buf_out) @@ -274,11 +238,11 @@ cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type, } static void -cmdq_wqe_fill(void *dst, void *src) +cmdq_wqe_fill(void *dst, void *src, int wqe_size) { memcpy((void *)((uint8_t *)dst + FIRST_DATA_TO_WRITE_LAST), (void *)((uint8_t *)src + FIRST_DATA_TO_WRITE_LAST), - CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST); + wqe_size - FIRST_DATA_TO_WRITE_LAST); /* The first 8 bytes should be written last. */ rte_atomic_thread_fence(rte_memory_order_release); @@ -369,182 +333,94 @@ cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in); } -/** - * Prepare necessary context for command queue, send a synchronous command with - * a direct response to hardware. It waits for completion of command by polling - * command queue for a response. - * - * @param[in] cmdq - * The command queue object that represents the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. 
- * @param[in] buf_in - * The input buffer containing the command parameters. - * @param[out] out_param - * A pointer to the location where the response data will be stored, if - * available. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - uint64_t *out_param, uint32_t timeout) +static void +cmdq_sync_wqe_prepare(struct hinic3_cmdq *cmdq, uint8_t mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + struct hinic3_cmdq_wqe *curr_wqe, uint16_t curr_pi, + enum hinic3_cmdq_cmd_type nic_cmd_type) { struct hinic3_cmdq_wqe wqe; - struct hinic3_wq *wq = cmdq->wq; - struct hinic3_cmdq_wqe *curr_wqe = NULL; - struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + int wrapped, wqe_size; + enum cmdq_cmd_type cmd_type; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + wqe_size = cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ ? + WQE_LCMDQ_SIZE : WQE_ENHANCE_CMDQ_SIZE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ - rte_spinlock_lock(&cmdq->cmdq_lock); + memset(&wqe, 0, (uint32_t)wqe_size); - curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } - - memset(&wqe, 0, sizeof(wqe)); wrapped = cmdq->wrapped; - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped, - mod, cmd, curr_prod_idx); - + cmd_type = (nic_cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP) ? + SYNC_CMD_DIRECT_RESP : SYNC_CMD_SGE_RESP; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_set_lcmd_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd, curr_pi); + else + hinic3_enhance_cmdq_set_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd); + /* The data written to HW should be in Big Endian Format */ hinic3_cpu_to_hw(&wqe, wqe_size); - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP; - - cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ - - if (out_param) { - wqe_lcmd = &curr_wqe->wqe_lcmd; - *out_param = rte_cpu_to_be_64(wqe_lcmd->completion.direct_resp); - } - - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; - -cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); - - return err; + cmdq_wqe_fill(curr_wqe, &wqe, wqe_size); } -/** - * Send a synchronous command with detailed response and wait for the - * completion. - * - * @param[in] cmdq - * The command queue object representing the queue to send the command to. 
- * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the parameters for the command. - * @param[out] buf_out - * The output buffer where the detailed response from the hardware will be - * stored. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - struct hinic3_cmd_buf *buf_out, uint32_t timeout) +#define NUM_WQEBBS_FOR_CMDQ_WQE 1 +#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2 + +static int cmdq_sync_cmd(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + uint64_t *out_param, uint32_t timeout, + enum hinic3_cmdq_cmd_type nic_cmd_type) { - struct hinic3_cmdq_wqe wqe; struct hinic3_wq *wq = cmdq->wq; struct hinic3_cmdq_wqe *curr_wqe = NULL; uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + uint32_t time; + uint64_t *direct_resp = NULL; + int err; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + num_wqebbs = (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) ? + NUM_WQEBBS_FOR_CMDQ_WQE : NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ + /* Keep wrapped and doorbell index correct */ rte_spinlock_lock(&cmdq->cmdq_lock); curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { + if (!curr_wqe) { err = -EBUSY; goto cmdq_unlock; } - memset(&wqe, 0, sizeof(wqe)); - wrapped = cmdq->wrapped; + cmdq_sync_wqe_prepare(cmdq, mod, cmd, buf_in, buf_out, + curr_wqe, curr_prod_idx, nic_cmd_type); + + cmdq->cmd_infos[curr_prod_idx].cmd_type = nic_cmd_type; next_prod_idx = curr_prod_idx + num_wqebbs; if (next_prod_idx >= wq->q_depth) { cmdq->wrapped = !cmdq->wrapped; next_prod_idx -= wq->q_depth; } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped, - mod, cmd, curr_prod_idx); - - hinic3_cpu_to_hw(&wqe, wqe_size); - - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP; - - cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); + cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); + time = msecs_to_cycles(timeout ? 
timeout : CMDQ_CMD_TIMEOUT); + err = hinic3_cmdq_poll_msg(cmdq, time); if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); + PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", curr_prod_idx); err = -ETIMEDOUT; goto cmdq_unlock; } - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ + rte_atomic_thread_fence(rte_memory_order_acquire); /* Read error code after completion */ + + if (out_param) { + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + direct_resp = + (uint64_t *)(&curr_wqe->wqe_lcmd.completion.direct_resp); + else + direct_resp = (uint64_t *) + (&curr_wqe->enhanced_cmdq_wqe.completion.sge_resp_lo_addr); + + *out_param = rte_cpu_to_be_64(*direct_resp); + } if (cmdq->errcode[curr_prod_idx]) err = cmdq->errcode[curr_prod_idx]; @@ -588,7 +464,8 @@ wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs) int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, - struct hinic3_cmd_buf *buf_in, uint64_t *out_param, uint32_t timeout) + struct hinic3_cmd_buf *buf_in, + uint64_t *out_param, uint32_t timeout) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; int err; @@ -605,8 +482,8 @@ hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, out_param, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, + NULL, out_param, timeout, HINIC3_CMD_TYPE_DIRECT_RESP); } int @@ -628,8 +505,8 @@ hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, buf_out, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, buf_out, + NULL, timeout, HINIC3_CMD_TYPE_SGE_RESP); } static void @@ -643,21 +520,23 @@ clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) { struct hinic3_ctrl *ctrl = NULL; uint32_t header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info); - int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN); - uint32_t wqe_size = cmdq_get_wqe_size(buf_len); uint16_t num_wqebbs; - - if (wqe_size == WQE_LCMD_SIZE) - ctrl = &wqe->wqe_lcmd.ctrl; - else - ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; - - /* Clear HW busy bit. */ - ctrl->ctrl_info = 0; + enum data_format df; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT); + if (df == DATA_SGE) + ctrl = &wqe->wqe_lcmd.ctrl; + else + ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; + ctrl->ctrl_info = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE; + } else { + wqe->enhanced_cmdq_wqe.completion.cs_format = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; + } rte_atomic_thread_fence(rte_memory_order_release); /**< Verify wqe is cleared. 
*/ - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq); hinic3_put_wqe(cmdq->wq, num_wqebbs); } @@ -735,25 +614,28 @@ static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; - struct hinic3_cmd_cmdq_ctxt cmdq_ctxt; - enum hinic3_cmdq_type cmdq_type; + struct hinic3_cmd_cmdq_ctxt cmdq_ctxt = {0}; + enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; uint16_t out_size = sizeof(cmdq_ctxt); + uint16_t cmd; int err; - for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { - memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt)); - cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + if (hwdev->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + cmd = HINIC3_MGMT_CMD_SET_CMDQ_CTXT; + } else { + cmdq_ctxt.enhance_ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_enhance_ctxt; + cmd = HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT; + } cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev); cmdq_ctxt.cmdq_id = cmdq_type; - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, - HINIC3_MGMT_CMD_SET_CMDQ_CTXT, + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, &cmdq_ctxt, sizeof(cmdq_ctxt), &cmdq_ctxt, &out_size); - if (err || !out_size || cmdq_ctxt.status) { - PMD_DRV_LOG(ERR, - "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", + PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", err, cmdq_ctxt.status, out_size); return -EFAULT; } @@ -794,6 +676,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) cmdqs->cmdqs_db_base = (uint8_t *)db_base; for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + cmdqs->cmdq[cmdq_type].cmdqs = cmdqs; err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, &cmdqs->saved_wqs[cmdq_type], cmdq_type); if (err) { @@ -801,8 +684,11 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) goto init_cmdq_err; } - cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], - &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + if (cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], + &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + else + hinic3_enhance_cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type]); } err = hinic3_set_cmdq_ctxts(hwdev); @@ -821,11 +707,12 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) } int -hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_init(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = NULL; size_t saved_wqs_size; char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE]; + uint32_t wqebb_shift; int err; cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0); @@ -835,6 +722,14 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) hwdev->cmdqs = cmdqs; cmdqs->hwdev = hwdev; + if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev)) + cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ; + else + cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ; + + wqebb_shift = (cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) ? 
+ CMDQ_ENHANCE_WQEBB_SHIFT : CMDQ_WQEBB_SHIFT; + saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq); cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0); if (!cmdqs->saved_wqs) { @@ -844,8 +739,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE); - snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", - hwdev->port_id); + snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", hwdev->port_id); cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name, HINIC3_CMDQ_DEPTH * HINIC3_MAX_CMDQ_TYPES, 0, 0, @@ -857,8 +751,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES, - HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT, - HINIC3_CMDQ_DEPTH); + HINIC3_CMDQ_WQ_BUF_SIZE, wqebb_shift, HINIC3_CMDQ_DEPTH); if (err) { PMD_DRV_LOG(ERR, "Allocate cmdq failed"); goto cmdq_alloc_err; @@ -884,7 +777,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } void -hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdqs_free(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; @@ -900,14 +793,36 @@ hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) rte_free(cmdqs); } +static int +hinic3_check_cmdq_done(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) +{ + struct hinic3_ctrl *ctrl = NULL; + uint32_t ctrl_info; + + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + /* Only arm bit using scmd wqe, the wqe is lcmd. */ + ctrl = &wqe->wqe_lcmd.ctrl; + ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); + + if (!WQE_COMPLETED(ctrl_info)) + return -EBUSY; + } else { + ctrl_info = wqe->enhanced_cmdq_wqe.completion.cs_format; + ctrl_info = hinic3_hw_cpu32(ctrl_info); + + if (!ENHANCE_CMDQ_WQE_CS_GET(ctrl_info, HW_BUSY)) + return -EBUSY; + } + return 0; +} + static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) { struct hinic3_cmdq_wqe *wqe = NULL; struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - struct hinic3_ctrl *ctrl = NULL; struct hinic3_cmdq_cmd_info *cmd_info = NULL; - uint32_t status_info, ctrl_info; + uint32_t status_info; uint16_t ci; int errcode; uint64_t end; @@ -928,13 +843,10 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) return -EINVAL; } - /* Only arm bit is using scmd wqe, the wqe is lcmd. */ - wqe_lcmd = &wqe->wqe_lcmd; - ctrl = &wqe_lcmd->ctrl; + /* Only arm bit using scmd wqe, the wqe is lcmd. 
*/ end = cycles + msecs_to_cycles(timeout); do { - ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); - if (WQE_COMPLETED(ctrl_info)) { + if (hinic3_check_cmdq_done(cmdq, wqe) == 0) { done = 1; break; } @@ -943,8 +855,14 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) } while (time_before(cycles, end)); if (done) { - status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); - errcode = WQE_ERRCODE_GET(status_info, VAL); + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + wqe_lcmd = &wqe->wqe_lcmd; + status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); + errcode = WQE_ERRCODE_GET(status_info, VAL); + } else { + status_info = hinic3_hw_cpu32(wqe->enhanced_cmdq_wqe.completion.cs_format); + errcode = ENHANCE_CMDQ_WQE_CS_GET(status_info, ERR_CODE); + } cmdq_update_errcode(cmdq, ci, errcode); clear_wqe_complete_bit(cmdq, wqe); err = 0; diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h index deac909488..b31b61029e 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.h +++ b/drivers/net/hinic3/base/hinic3_cmdq.h @@ -13,25 +13,55 @@ /* Pmd driver uses 64, kernel l2nic uses 4096. */ #define HINIC3_CMDQ_DEPTH 64 -#define HINIC3_CMDQ_BUF_SIZE 2048U +#define HINIC3_CMDQ_BUF_SIZE 1024U #define HINIC3_CEQ_ID_CMDQ 0 -enum cmdq_scmd_type { CMDQ_SET_ARM_CMD = 2 }; +#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -enum cmdq_wqe_type { WQE_LCMD_TYPE = 0, WQE_SCMD_TYPE = 1 }; +enum hinic3_cmdq_mode { + HINIC3_NORMAL_CMDQ, + HINIC3_ENHANCE_CMDQ +}; + +enum cmdq_scmd_type { + CMDQ_SET_ARM_CMD = 2 +}; + +enum cmdq_wqe_type { + WQE_LCMD_TYPE, + WQE_SCMD_TYPE +}; -enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 }; +enum ctrl_sect_len { + CTRL_SECT_LEN = 1, + CTRL_DIRECT_SECT_LEN = 2 +}; -enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 }; +enum bufdesc_len { + BUFDESC_LCMD_LEN = 2, + BUFDESC_SCMD_LEN = 3 +}; -enum data_format { DATA_SGE = 0}; +enum data_format { + DATA_SGE +}; -enum completion_format { COMPLETE_DIRECT = 0, COMPLETE_SGE = 1 }; +enum completion_format { + COMPLETE_DIRECT, + COMPLETE_SGE +}; -enum completion_request { CEQ_SET = 1 }; +enum completion_request { + CEQ_SET = 1 +}; -enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD }; +enum cmdq_cmd_type { + SYNC_CMD_DIRECT_RESP, + SYNC_CMD_SGE_RESP, + ASYNC_CMD +}; enum hinic3_cmdq_type { HINIC3_CMDQ_SYNC, @@ -44,17 +74,63 @@ enum hinic3_db_src_type { HINIC3_DB_SRC_L2NIC_SQ_TYPE }; -enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE }; +enum hinic3_cmdq_db_type { + HINIC3_DB_SQ_RQ_TYPE, + HINIC3_DB_CMDQ_TYPE +}; /* Cmdq ack type. */ enum hinic3_ack_type { HINIC3_ACK_TYPE_CMDQ = 0, HINIC3_ACK_TYPE_SHARE_CQN = 1, HINIC3_ACK_TYPE_APP_CQN = 2, - HINIC3_MOD_ACK_MAX = 15 }; +struct cmdq_enhance_completion { + uint32_t cs_format; + uint32_t sge_resp_hi_addr; + uint32_t sge_resp_lo_addr; + uint32_t sge_resp_len; /* bit 14~31 rsvd, soft can't use. 
*/ +}; + +struct cmdq_enhance_response { + uint32_t cs_format; + uint32_t resvd; + uint64_t direct_data; +}; + +struct sge_send_info { + uint32_t sge_hi_addr; + uint32_t sge_li_addr; + uint32_t seg_len; + uint32_t rsvd; +}; + +struct ctrl_section { + uint32_t header; + uint32_t rsv; + uint32_t sge_send_hi_addr; + uint32_t sge_send_lo_addr; +}; + +struct enhanced_cmdq_wqe { + struct ctrl_section ctrl_sec; /* 16B */ + struct cmdq_enhance_completion completion; /* 16B */ +}; + +/* Enhance cmdq context of hardware */ +struct enhance_cmdq_ctxt_info { + uint64_t eq_cfg; + uint64_t dfx_pi_ci; + + uint64_t pft_thd; + uint64_t pft_ci; + + uint64_t rsv; + uint64_t ci_cla_addr; +}; + /* Cmdq wqe ctrls. */ struct hinic3_cmdq_header { uint32_t header_info; @@ -126,6 +202,7 @@ struct hinic3_cmdq_wqe { union { struct hinic3_cmdq_inline_wqe inline_wqe; struct hinic3_cmdq_wqe_lcmd wqe_lcmd; + struct enhanced_cmdq_wqe enhanced_cmdq_wqe; }; }; @@ -142,8 +219,10 @@ struct hinic3_cmd_cmdq_ctxt { uint16_t func_idx; uint8_t cmdq_id; uint8_t rsvd1[5]; - - struct hinic3_cmdq_ctxt_info ctxt_info; + union { + struct hinic3_cmdq_ctxt_info ctxt_info; + struct enhance_cmdq_ctxt_info enhance_ctxt_info; + }; }; enum hinic3_cmdq_status { @@ -173,8 +252,10 @@ struct hinic3_cmdq { rte_spinlock_t cmdq_lock; struct hinic3_cmdq_ctxt_info cmdq_ctxt; + struct enhance_cmdq_ctxt_info cmdq_enhance_ctxt; struct hinic3_cmdq_cmd_info *cmd_infos; + struct hinic3_cmdqs *cmdqs; }; struct hinic3_cmdqs { @@ -188,6 +269,7 @@ struct hinic3_cmdqs { struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES]; uint32_t status; + uint8_t cmdq_mode; }; struct hinic3_cmd_buf { @@ -215,8 +297,8 @@ int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod int hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, uint32_t timeout); -int hinic3_init_cmdqs(struct hinic3_hwdev *hwdev); +int hinic3_cmdq_init(struct hinic3_hwdev *hwdev); -void hinic3_free_cmdqs(struct hinic3_hwdev *hwdev); +void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev); #endif /* _HINIC3_CMDQ_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.c b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c new file mode 100644 index 0000000000..e09597c9f3 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Huawei Technologies Co., Ltd + */ + +#include <rte_mbuf.h> + +#include "hinic3_compat.h" +#include "hinic3_hwdev.h" +#include "hinic3_hwif.h" +#include "hinic3_wq.h" +#include "hinic3_cmd.h" +#include "hinic3_mgmt.h" +#include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" + +#define WQ_PREFETCH_MAX 4 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +void +hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq) +{ + struct enhance_cmdq_ctxt_info *ctxt_info = &cmdq->cmdq_enhance_ctxt; + struct hinic3_wq *wq = cmdq->wq; + uint64_t cmdq_first_block_paddr, pfn; + uint16_t start_ci = (uint16_t)wq->cons_idx; + uint32_t start_pi = (uint16_t)wq->prod_idx; + + /* The data in HW is Big Endian Format */ + cmdq_first_block_paddr = wq->queue_buf_paddr; + pfn = CMDQ_PFN(cmdq_first_block_paddr, RTE_PGSIZE_4K); + + /* First part 16B */ + ctxt_info->eq_cfg = + ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) | + ENHANCED_CMDQ_SET(0, CTXT0_EQ) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) | + ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT); + + 
ctxt_info->dfx_pi_ci = + ENHANCED_CMDQ_SET(0, CTXT1_Q_DIS) | + ENHANCED_CMDQ_SET(0, CTXT1_ERR_CODE) | + ENHANCED_CMDQ_SET(start_pi, CTXT1_PI) | + ENHANCED_CMDQ_SET(start_ci, CTXT1_CI); + + /* Second part 16B */ + ctxt_info->pft_thd = + ENHANCED_CMDQ_SET(CI_HIGN_IDX(start_ci), CTXT2_PFT_CI) | + ENHANCED_CMDQ_SET(1, CTXT2_O_BIT) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MIN, CTXT2_PFT_MIN) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MAX, CTXT2_PFT_MAX) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_THRESHOLD, CTXT2_PFT_THD); + ctxt_info->pft_ci = + ENHANCED_CMDQ_SET(pfn, CTXT3_PFT_CI_ADDR) | + ENHANCED_CMDQ_SET(start_ci, CTXT3_PFT_CI); + + /* Third part 16B */ + pfn = WQ_BLOCK_PFN(cmdq_first_block_paddr); + + ctxt_info->ci_cla_addr = ENHANCED_CMDQ_SET(pfn, CTXT4_CI_CLA_ADDR); +} + +static void +enhance_cmdq_set_completion(struct cmdq_enhance_completion *completion, + const struct hinic3_cmd_buf *buf_out) +{ + completion->sge_resp_hi_addr = upper_32_bits(buf_out->dma_addr); + completion->sge_resp_lo_addr = lower_32_bits(buf_out->dma_addr); + completion->sge_resp_len = HINIC3_CMDQ_BUF_SIZE; +} + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, + enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, + struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd) +{ + struct enhanced_cmdq_wqe *enhanced_wqe = &wqe->enhanced_cmdq_wqe; + + enhanced_wqe->ctrl_sec.header = + ENHANCE_CMDQ_WQE_HEADER_SET(buf_in->size, SEND_SGE_LEN) | + ENHANCE_CMDQ_WQE_HEADER_SET(1, BDSL) | + ENHANCE_CMDQ_WQE_HEADER_SET(DATA_SGE, DF) | + ENHANCE_CMDQ_WQE_HEADER_SET(NORMAL_WQE_TYPE, DN) | + ENHANCE_CMDQ_WQE_HEADER_SET(COMPACT_WQE_TYPE, EC) | + ENHANCE_CMDQ_WQE_HEADER_SET((uint32_t)wrapped, HW_BUSY_BIT); + + enhanced_wqe->ctrl_sec.sge_send_hi_addr = upper_32_bits(buf_in->dma_addr); + enhanced_wqe->ctrl_sec.sge_send_lo_addr = lower_32_bits(buf_in->dma_addr); + + enhanced_wqe->completion.cs_format = + ENHANCE_CMDQ_WQE_CS_SET(cmd, CMD) | + ENHANCE_CMDQ_WQE_CS_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE) | + ENHANCE_CMDQ_WQE_CS_SET(mod, MOD); + + switch (cmd_type) { + case SYNC_CMD_DIRECT_RESP: + enhanced_wqe->completion.cs_format |= ENHANCE_CMDQ_WQE_CS_SET(INLINE_DATA, CF); + break; + case SYNC_CMD_SGE_RESP: + if (buf_out) { + enhanced_wqe->completion.cs_format |= + ENHANCE_CMDQ_WQE_CS_SET(SGE_RESPONSE, CF); + enhance_cmdq_set_completion(&enhanced_wqe->completion, buf_out); + } + break; + case ASYNC_CMD: + break; + } +} diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.h b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h new file mode 100644 index 0000000000..8de0ae4d71 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_CMDQ_ENHANCE_H_ +#define _HINIC3_CMDQ_ENHANCE_H_ + +#include "hinic3_mgmt.h" + +#define NORMAL_WQE_TYPE 0 +#define COMPACT_WQE_TYPE 1 + +/* First part 16B */ +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT0_RSV1_SHIFT 52 +#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT 53 +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT 61 +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT 62 +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63 + +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT0_RSV1_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_EQ_MASK 0xFFU +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_SHIFT 0 
+#define ENHANCED_CMDQ_CTXT1_ERR_CODE_SHIFT 1 +#define ENHANCED_CMDQ_CTXT1_RSV1_SHIFT 3 +#define ENHANCED_CMDQ_CTXT1_PI_SHIFT 32 +#define ENHANCED_CMDQ_CTXT1_CI_SHIFT 48 + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_MASK 0x1U +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_MASK 0x3U +#define ENHANCED_CMDQ_CTXT1_RSV1_MASK 0x1FFFFFFFU +#define ENHANCED_CMDQ_CTXT1_PI_MASK 0xFFFFU +#define ENHANCED_CMDQ_CTXT1_CI_MASK 0xFFFFU + +/* Second part 16B */ +#define ENHANCED_CMDQ_CTXT2_PFT_CI_SHIFT 0 +#define ENHANCED_CMDQ_CTXT2_O_BIT_SHIFT 4 +#define ENHANCED_CMDQ_CTXT2_PFT_THD_SHIFT 32 +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_SHIFT 46 +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_SHIFT 57 + +#define ENHANCED_CMDQ_CTXT2_PFT_CI_MASK 0xFU +#define ENHANCED_CMDQ_CTXT2_O_BIT_MASK 0x1U +#define ENHANCED_CMDQ_CTXT2_PFT_THD_MASK 0x3FFFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_MASK 0x7FFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_MASK 0x7FU + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT3_PFT_CI_SHIFT 52 + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT3_PFT_CI_MASK 0xFFFFU + +/* Third part 16B */ +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_MASK 0x7FFFFFFFFFFFFFU + +#define ENHANCED_CMDQ_SET(val, member) \ + (((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \ + ENHANCED_CMDQ_##member##_SHIFT) + +#define CI_IDX_HIGH_SHIFH 12 +#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_SHIFT 0 +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_SHIFT 19 +#define ENHANCE_CMDQ_WQE_HEADER_DF_SHIFT 28 +#define ENHANCE_CMDQ_WQE_HEADER_DN_SHIFT 29 +#define ENHANCE_CMDQ_WQE_HEADER_EC_SHIFT 30 +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_MASK 0x3FFFFU +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_HEADER_DF_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_DN_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_EC_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_HEADER_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_HEADER_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0 +#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT 4 +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12 +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT 14 +#define ENHANCE_CMDQ_WQE_CS_MOD_SHIFT 16 +#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU +#define ENHANCE_CMDQ_WQE_CS_CMD_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK 0x1U +#define ENHANCE_CMDQ_WQE_CS_MOD_MASK 0x1FU +#define ENHANCE_CMDQ_WQE_CS_CF_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_CS_##member##_MASK) + +enum complete_format { + INLINE_DATA, + SGE_RESPONSE +}; + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd); + +void hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq 
*cmdq); + +#endif /*_HINIC3_CMDQ_ENHANCE_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c index d259b88a2d..6541bc0428 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.c +++ b/drivers/net/hinic3/base/hinic3_hw_comm.c @@ -12,7 +12,7 @@ #include "hinic3_wq.h" #include "hinic3_nic_cfg.h" -/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */ +/* Buffer sizes must be in ascending order. */ const uint32_t hinic3_hw_rx_buf_size[] = { HINIC3_RX_BUF_SIZE_32B, HINIC3_RX_BUF_SIZE_64B, @@ -239,11 +239,14 @@ hinic3_convert_rx_buf_size(uint32_t rx_buf_sz, uint32_t *match_sz) } static uint16_t -get_hw_rx_buf_size(uint32_t rx_buf_sz) +get_hw_rx_buf_size(struct hinic3_hwdev *hwdev, uint32_t rx_buf_sz) { uint16_t num_hw_types = RTE_DIM(hinic3_hw_rx_buf_size); uint16_t i; + if (HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev)) + return rx_buf_sz; + for (i = 0; i < num_hw_types; i++) { if (hinic3_hw_rx_buf_size[i] == rx_buf_sz) return i; @@ -271,8 +274,12 @@ hinic3_set_root_ctxt(struct hinic3_hwdev *hwdev, uint32_t rq_depth, root_ctxt.cmdq_depth = 0; root_ctxt.lro_en = 1; root_ctxt.rq_depth = (uint16_t)rte_log2_u32(rq_depth); - root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz); + root_ctxt.rx_buf_sz = get_hw_rx_buf_size(hwdev, rx_buf_sz); root_ctxt.sq_depth = (uint16_t)rte_log2_u32(sq_depth); + root_ctxt.cmdq_mode = hwdev->cmdqs->cmdq_mode; + + if (hwdev->cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) + root_ctxt.cmdq_depth--; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, @@ -403,7 +410,7 @@ hinic3_comm_features_nego(struct hinic3_hwdev *hwdev, uint16_t out_size = sizeof(feature_nego); int err; - if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD) + if (!hwdev || !s_feature || size > HINIC3_MAX_FEATURE_QWORD) return -EINVAL; memset(&feature_nego, 0, sizeof(feature_nego)); diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h index b86f5aad8f..42ff04ee9d 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.h +++ b/drivers/net/hinic3/base/hinic3_hw_comm.h @@ -9,17 +9,17 @@ #define HINIC3_MGMT_CMD_OP_GET 0 #define HINIC3_MGMT_CMD_OP_SET 1 -#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 -#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 -#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 -#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 -#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 - -#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU -#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU -#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U +#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 +#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 +#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 +#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 +#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 + +#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU +#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU +#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U #define HINIC3_MSIX_CNT_SET(val, member) \ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \ @@ -129,7 +129,7 @@ struct hinic3_cmd_root_ctxt { uint8_t cmdq_depth; uint16_t rx_buf_sz; uint8_t lro_en; - uint8_t rsvd1; + uint8_t cmdq_mode; uint16_t sq_depth; uint16_t rq_depth; uint64_t rsvd2; @@ -143,17 +143,16 @@ enum hinic3_fw_ver_type { HINIC3_FW_VER_TYPE_CFG, }; -#define MGMT_MSG_CMD_OP_SET 1 -#define MGMT_MSG_CMD_OP_GET 0 +#define 
MGMT_MSG_CMD_OP_SET 1 +#define MGMT_MSG_CMD_OP_GET 0 -#define COMM_MAX_FEATURE_QWORD 4 struct comm_cmd_feature_nego { struct mgmt_msg_head head; uint16_t func_id; uint8_t opcode; /**< 1: set, 0: get. */ uint8_t rsvd; - uint64_t s_feature[COMM_MAX_FEATURE_QWORD]; + uint64_t s_feature[HINIC3_MAX_FEATURE_QWORD]; }; #define HINIC3_FW_VERSION_LEN 16 diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c index 668bbf4a0e..5d12cf7b5f 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.c +++ b/drivers/net/hinic3/base/hinic3_hwdev.c @@ -261,7 +261,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) { int err; - err = hinic3_init_cmdqs(hwdev); + err = hinic3_cmdq_init(hwdev); if (err) { PMD_DRV_LOG(ERR, "Init cmd queues failed"); return err; @@ -276,7 +276,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) return 0; set_cmdq_depth_err: - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); return err; } @@ -284,7 +284,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev) { - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); } static void @@ -426,6 +426,12 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) goto func_reset_err; } + err = hinic3_get_comm_features(hwdev, hwdev->features, HINIC3_MAX_FEATURE_QWORD); + if (err) { + PMD_DRV_LOG(ERR, "Get comm features failed"); + goto get_common_features_err; + } + err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1); if (err) goto set_used_state_err; @@ -444,6 +450,7 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0); set_used_state_err: func_reset_err: +get_common_features_err: get_func_info_err: free_mgmt_channel(hwdev); diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h index 161f1e2de5..c6661aa1a6 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.h +++ b/drivers/net/hinic3/base/hinic3_hwdev.h @@ -23,6 +23,18 @@ enum hinic3_set_arm_type { HINIC3_SET_ARM_TYPE_NUM }; +enum { + HINIC3_F_API_CHAIN = 1U << 0, + HINIC3_F_CLP = 1U << 1, + HINIC3_F_CHANNEL_DETECT = 1U << 2, + HINIC3_F_MBOX_SEGMENT = 1U << 3, + HINIC3_F_CMDQ_NUM = 1U << 4, + HINIC3_F_VIRTIO_VQ_SIZE = 1U << 5, + HINIC3_F_EXTEND_CAP = 1U << 6, + HINIC3_F_SMF_CACHE_INVALID = 1U << 7, + HINIC3_F_ONLY_ENHANCE_CMDQ = 1U << 8, + HINIC3_F_USE_REAL_RX_BUF_SIZE = 1U << 9, +}; struct hinic3_page_addr { void *virt_addr; uint64_t phys_addr; @@ -78,6 +90,11 @@ struct hinic3_hw_stats { #define HINIC3_CHIP_FAULT_SIZE (110 * 1024) #define MAX_DRV_BUF_SIZE 4096 +#define HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_ONLY_ENHANCE_CMDQ) +#define HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_USE_REAL_RX_BUF_SIZE) + struct nic_cmd_chip_fault_stats { uint32_t offset; uint8_t chip_fault_stats[MAX_DRV_BUF_SIZE]; @@ -141,6 +158,7 @@ struct hinic3_hwdev { uint16_t max_vfs; uint16_t link_status; + uint64_t features[HINIC3_MAX_FEATURE_QWORD]; }; bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev); diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c index 5db6d49922..b1f850dfff 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.c +++ b/drivers/net/hinic3/base/hinic3_mgmt.c @@ -13,6 +13,8 @@ #define SEGMENT_LEN 48 #define MGMT_MSG_MAX_SEQ_ID \ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN) +#define MGMT_MSG_LAST_SEG_MAX_LEN \ + 
(MAX_PF_MGMT_BUF_SIZE - SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID) #define BUF_OUT_DEFAULT_SIZE 1 @@ -34,7 +36,8 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, uint8_t seq_id, uint8_t seg_len, uint16_t msg_id) { - if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN) + if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN || + (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)) return false; if (seq_id == 0) { diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h index f8148406d3..4e77b9bec4 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.h +++ b/drivers/net/hinic3/base/hinic3_mgmt.h @@ -70,6 +70,8 @@ typedef enum { #define HINIC3_TOE_RES (1 << RES_TYPE_TOE) #define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC) +#define HINIC3_MAX_FEATURE_QWORD 4 + struct hinic3_recv_msg { void *msg; diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index c35fefdeac..ac44da46c2 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -48,6 +48,46 @@ static const struct vf_msg_handler vf_mag_cmd_handler[] = { }, }; +int +hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, + uint16_t cmd, void *buf_in, uint16_t in_size, + void *buf_out, uint16_t *out_size) +{ + uint32_t i; + bool cmd_to_pf = false; + struct hinic3_handler_info handler_info = { + .cmd = cmd, + .buf_in = buf_in, + .in_size = in_size, + .buf_out = buf_out, + .out_size = out_size, + .dst_func = HINIC3_MGMT_SRC_ID, + .direction = HINIC3_MSG_DIRECT_SEND, + .ack_type = HINIC3_MSG_ACK, + }; + + if (hwdev == NULL) + return -EINVAL; + + if (hinic3_func_type(hwdev) == TYPE_VF) { + if (mod == HINIC3_MOD_HILINK) { + for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { + if (cmd == vf_mag_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } else if (mod == HINIC3_MOD_L2NIC) { + for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { + if (cmd == vf_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } + } + if (cmd_to_pf) + handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); + + return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); +} + /** * Set CI table for a SQ. 
* @@ -1712,43 +1752,6 @@ hinic3_set_rq_flush(struct hinic3_hwdev *hwdev, uint16_t q_id) return err; } -int -hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, - uint16_t cmd, void *buf_in, uint16_t in_size, - void *buf_out, uint16_t *out_size) -{ - uint32_t i; - bool cmd_to_pf = false; - struct hinic3_handler_info handler_info = { - .cmd = cmd, - .buf_in = buf_in, - .in_size = in_size, - .buf_out = buf_out, - .out_size = out_size, - .dst_func = HINIC3_MGMT_SRC_ID, - .direction = HINIC3_MSG_DIRECT_SEND, - .ack_type = HINIC3_MSG_ACK, - }; - - if (hinic3_func_type(hwdev) == TYPE_VF) { - if (mod == HINIC3_MOD_HILINK) { - for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { - if (cmd == vf_mag_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } else if (mod == HINIC3_MOD_L2NIC) { - for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { - if (cmd == vf_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } - } - if (cmd_to_pf) - handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); - - return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); -} - int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status) diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build index 48ac7a47f5..729980d087 100644 --- a/drivers/net/hinic3/base/meson.build +++ b/drivers/net/hinic3/base/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2025 Huawei Technologies Co., Ltd base_sources = files( + 'hinic3_cmdq_enhance.c', 'hinic3_cmdq.c', 'hinic3_eqs.c', 'hinic3_hw_cfg.c', -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
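The enhanced command queue describes itself to hardware by packing fields into 64-bit context words with the shift/mask macros added above. The standalone sketch below reproduces the CTXT0 packing performed by hinic3_enhance_cmdq_init_queue_ctxt(); the masks and shifts are copied from hinic3_cmdq_enhance.h in this patch, while the queue address is a made-up example value.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0
#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT          53
#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT     61
#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT      62
#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63

#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU
#define ENHANCED_CMDQ_CTXT0_EQ_MASK          0xFFU
#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK     0x1U
#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK      0x1U
#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U

#define ENHANCED_CMDQ_SET(val, member) \
	(((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \
	 ENHANCED_CMDQ_##member##_SHIFT)

int main(void)
{
	/* Example: a 4 KiB-aligned queue buffer at 0x12345000 -> pfn = addr >> 12. */
	uint64_t pfn = 0x12345000ULL >> 12;
	uint64_t eq_cfg = ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) |
			  ENHANCED_CMDQ_SET(0, CTXT0_EQ) |
			  ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) |
			  ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) |
			  ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT);

	/* The pfn occupies bits 0..51 and the HW busy bit lands in bit 63. */
	printf("eq_cfg = 0x%016" PRIx64 "\n", eq_cfg); /* prints 0x8000000000012345 */
	return 0;
}

On WQE sizing: a normal cmdq WQE is 64 bytes and occupies one 64-byte WQEBB (CMDQ_WQEBB_SHIFT = 6), whereas an enhanced WQE is 32 bytes built from two 16-byte WQEBBs (CMDQ_ENHANCE_WQEBB_SHIFT = 4), which is why cmdq_sync_cmd() reserves NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE = 2 entries per command.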
* [V3 3/7] net/hinic3: use different callback func to split new/old cmdq operations 2026-03-18 2:19 ` [v3 " Feifei Wang 2026-03-18 2:19 ` [V3 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang 2026-03-18 2:19 ` [V3 2/7] net/hinic3: add enhance cmdq " Feifei Wang @ 2026-03-18 2:19 ` Feifei Wang 2026-03-18 2:19 ` [V3 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang ` (3 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 2:19 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For the new SPx series NIC with enhanced cmdq, control messages are sent to the hardware tile in the NIC (htn); this differs from the previous SPx NIC, where control messages are sent to the software tile in the NIC (stn). (A minimal sketch of this ops-table dispatch is appended after this patch.) Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 50 ++---- drivers/net/hinic3/base/hinic3_nic_cfg.h | 79 +++++---- drivers/net/hinic3/hinic3_ethdev.c | 16 +- drivers/net/hinic3/hinic3_nic_io.h | 130 ++++++++++++++ drivers/net/hinic3/hinic3_rx.c | 3 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 161 ++++++++++++++++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 55 ++++++ drivers/net/hinic3/htn_adapt/meson.build | 7 + .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 145 ++++++++++++++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 38 +++++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 11 files changed, 618 insertions(+), 73 deletions(-) create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index ac44da46c2..22caac0457 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -11,6 +11,7 @@ #include "hinic3_mbox.h" #include "hinic3_nic_cfg.h" #include "hinic3_wq.h" +#include "hinic3_nic_io.h" struct vf_msg_handler { uint16_t cmd; @@ -442,6 +443,7 @@ int hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) { struct hinic3_vport_state en_state; + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint16_t out_size = sizeof(en_state); int err; @@ -451,6 +453,7 @@ hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) memset(&en_state, 0, sizeof(en_state)); en_state.func_id = hinic3_global_func_id(hwdev); en_state.state = enable ? 
1 : 0; + en_state.num_qps = nic_dev->num_rqs; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_VPORT_ENABLE, @@ -1159,13 +1162,12 @@ hinic3_rss_set_hash_key(struct hinic3_hwdev *hwdev, uint8_t *key, uint16_t key_s } int -hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, - uint32_t *indir_table, uint32_t indir_table_size) +hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table) { struct hinic3_cmd_buf *cmd_buf = NULL; - uint16_t *indir_tbl = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; int err; - uint32_t i; if (!hwdev || !indir_table) return -EINVAL; @@ -1177,31 +1179,28 @@ hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, } cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, - cmd_buf, cmd_buf, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_get_rss_indir_table(nic_dev, cmd_buf); + err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, cmd_buf, 0); if (err) { PMD_DRV_LOG(ERR, "Get rss indir table failed"); hinic3_free_cmd_buf(cmd_buf); return err; } - indir_tbl = (uint16_t *)cmd_buf->buf; - for (i = 0; i < indir_table_size; i++) - indir_table[i] = *(indir_tbl + i); + nic_dev->cmdq_ops->cmd_buf_to_rss_indir_table(cmd_buf, indir_table); hinic3_free_cmd_buf(cmd_buf); return 0; } int -hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size) +hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table) { - struct nic_rss_indirect_tbl *indir_tbl = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - uint32_t i, size; - uint32_t *temp = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; uint64_t out_param = 0; int err; @@ -1214,22 +1213,9 @@ hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table return -ENOMEM; } - cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; - memset(indir_tbl, 0, sizeof(*indir_tbl)); - - for (i = 0; i < indir_table_size; i++) - indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - size = sizeof(indir_tbl->entry) / sizeof(uint16_t); - temp = (uint32_t *)indir_tbl->entry; - for (i = 0; i < size; i++) - temp[i] = rte_cpu_to_be_32(temp[i]); - - err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, - cmd_buf, &out_param, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir_table, cmd_buf); + err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set rss indir table failed"); err = -EFAULT; @@ -1477,7 +1463,7 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EIO; } - *cos_id = vf_dcb.state.default_cos; + *cos_id = vf_dcb.state.default_cos % HINIC3_COS_NUM_MAX_HTN; return 0; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index a88d62333d..06d5bc7d1b 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -14,16 +14,17 @@ #define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1) #define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1) -#define HINIC3_DCB_UP_MAX 0x8 +#define HINIC3_DCB_UP_MAX 0x8 -#define 
HINIC3_MAX_NUM_RQ 256 +#define HINIC3_MAX_NUM_RQ 256 -#define HINIC3_MAX_MTU_SIZE 9600 -#define HINIC3_MIN_MTU_SIZE 256 +#define HINIC3_MAX_MTU_SIZE 9600 +#define HINIC3_MIN_MTU_SIZE 256 -#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX_HTN 4 -#define HINIC3_VLAN_TAG_SIZE 4 +#define HINIC3_VLAN_TAG_SIZE 4 #define HINIC3_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2) @@ -34,28 +35,41 @@ #define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen) -#define HINIC3_PF_SET_VF_ALREADY 0x4 -#define HINIC3_MGMT_STATUS_EXIST 0x6 -#define CHECK_IPSU_15BIT 0x8000 +#define HINIC3_PF_SET_VF_ALREADY 0x4 +#define HINIC3_MGMT_STATUS_EXIST 0x6 +#define CHECK_IPSU_15BIT 0x8000 -#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB -#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC +#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB +#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC -#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF +#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF -#define HINIC3_MAX_UC_MAC_ADDRS 128 -#define HINIC3_MAX_MC_MAC_ADDRS 2048 +#define HINIC3_MAX_UC_MAC_ADDRS 128 +#define HINIC3_MAX_MC_MAC_ADDRS 2048 -#define CAP_INFO_MAX_LEN 512 -#define VENDOR_MAX_LEN 17 +#define CAP_INFO_MAX_LEN 512 +#define VENDOR_MAX_LEN 17 /* Structures for RSS config. */ -#define HINIC3_RSS_INDIR_SIZE 256 -#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 -#define HINIC3_RSS_KEY_SIZE 40 -#define HINIC3_RSS_ENABLE 0x01 -#define HINIC3_RSS_DISABLE 0x00 -#define HINIC3_INVALID_QID_BASE 0xffff +#define HINIC3_RSS_INDIR_SIZE 256 +#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 +#define HINIC3_RSS_KEY_SIZE 40 +#define HINIC3_RSS_ENABLE 0x01 +#define HINIC3_RSS_DISABLE 0x00 +#define HINIC3_INVALID_QID_BASE 0xffff + +#define HINIC3_SUPPORT_FEATURE(dev, feature) \ + ((hinic3_get_driver_feature(dev) & NIC_F_##feature) != 0) +#define HINIC3_SUPPORT_RX_HW_COMPACT_CQE(dev) \ + HINIC3_SUPPORT_FEATURE(dev, RX_HW_COMPACT_CQE) +#define HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(dev) \ + HINIC3_SUPPORT_FEATURE(dev, TX_WQE_COMPACT_TASK) +#define HINIC3_SUPPORT_VXLAN_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, VXLAN_OFFLOAD) +#define HINIC3_SUPPORT_GENEVE_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, GENEVE_OFFLOAD) +#define HINIC3_SUPPORT_IPXIP_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, IPXIP_OFFLOAD) struct hinic3_rss_type { uint8_t tcp_ipv6_ext; @@ -312,7 +326,9 @@ struct hinic3_vport_state { uint16_t func_id; uint16_t rsvd1; uint8_t state; /**< 0:disable, 1:enable. */ - uint8_t rsvd2[3]; + uint8_t num_qps; + uint8_t rx_compact_wqe_en; + uint8_t rsvd2; }; #define MAG_CMD_PORT_DISABLE 0x0 @@ -670,12 +686,15 @@ enum hinic3_func_tbl_cfg_bitmap { FUNC_CFG_INIT, FUNC_CFG_RX_BUF_SIZE, FUNC_CFG_MTU, + FUNC_CFG_RX_COMPACT_WQE_EN, /**< Enable 8Byte WQE. */ }; struct hinic3_func_tbl_cfg { uint16_t rx_wqe_buf_size; uint16_t mtu; - uint32_t rsvd[9]; + uint8_t rx_compact_wqe_en; /**< Enable Rx 8Byte compact WQE. */ + uint8_t rsvd0[3]; + uint32_t rsvd1[8]; }; struct hinic3_cmd_set_func_tbl { @@ -895,7 +914,7 @@ struct hinic3_set_fdir_ethertype_rule { struct mgmt_msg_head head; uint16_t func_id; - uint16_t rsvd1; + uint16_t index; uint8_t pkt_type_en; uint8_t pkt_type; uint8_t qid; @@ -1231,14 +1250,11 @@ int hinic3_rss_template_free(struct hinic3_hwdev *hwdev); * Device pointer to hwdev. * @param[in] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. 
*/ -int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table); /** * Get RSS indirect table. @@ -1247,14 +1263,11 @@ int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_t * Device pointer to hwdev. * @param[out] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table); /** * Set RSS type. diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index da2d6722d2..780b17414a 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -15,6 +15,8 @@ #include "base/hinic3_hw_comm.h" #include "base/hinic3_nic_cfg.h" #include "base/hinic3_nic_event.h" +#include "htn_adapt/hinic3_htn_cmdq.h" +#include "stn_adapt/hinic3_stn_cmdq.h" #include "hinic3_nic_io.h" #include "hinic3_tx.h" #include "hinic3_rx.h" @@ -2577,8 +2579,7 @@ hinic3_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) { PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err); return err; @@ -2626,8 +2627,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) return err; @@ -2648,8 +2648,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, } } - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl); if (err) PMD_DRV_LOG(ERR, "Set RSS reta table failed"); @@ -3387,6 +3386,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto get_cap_fail; } + if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) + nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + else + nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index db5802e4b7..c8e690981b 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -6,6 +6,7 @@ #define _HINIC3_NIC_IO_H_ #include "hinic3_ethdev.h" +#include "base/hinic3_cmdq.h" #define HINIC3_SQ_WQEBB_SHIFT 4 #define HINIC3_RQ_WQEBB_SHIFT 3 @@ -25,6 +26,13 @@ #define HINIC3_CI_PADDR(base_paddr, q_id) \ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE) +#define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) + +#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -37,12 +45,119 @@ enum hinic3_queue_type { HINIC3_MAX_QUEUE_TYPE, }; +enum hinic3_qp_ctxt_type { + HINIC3_QP_CTXT_TYPE_SQ, + HINIC3_QP_CTXT_TYPE_RQ, +}; + +/* Prepare cmd to clean tso/lro space */ +typedef uint8_t 
(*prepare_cmd_buf_clean_tso_lro_space_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type); +/* Prepare cmd to store RQ and TQ ctxt */ +typedef uint8_t (*prepare_cmd_buf_qp_context_multi_store_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, + uint16_t max_ctxts); +/* Prepare cmd to modify vlan tag */ +typedef uint8_t (*prepare_cmd_buf_modify_svlan_t)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, + uint8_t vlan_mode); +/* Prepare cmd to set RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_set_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf); +/* Prepare cmd to get RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_get_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf); +/* Configure RSS indir table */ +typedef void (*cmd_buf_to_rss_indir_table_t)(const struct hinic3_cmd_buf *cmd_buf, + uint32_t *indir_table); + +struct hinic3_nic_cmdq_ops { + prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space; + prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store; + prepare_cmd_buf_modify_svlan_t prepare_cmd_buf_modify_svlan; + prepare_cmd_buf_set_rss_indir_table_t prepare_cmd_buf_set_rss_indir_table; + prepare_cmd_buf_get_rss_indir_table_t prepare_cmd_buf_get_rss_indir_table; + cmd_buf_to_rss_indir_table_t cmd_buf_to_rss_indir_table; +}; + /* Doorbell info. */ struct hinic3_db { uint32_t db_info; uint32_t pi_hi; }; +struct hinic3_sq_ctxt { + uint32_t ci_pi; + uint32_t drop_mode_sp; + uint32_t wq_pfn_hi_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd0; + uint32_t pkt_drop_thd; + uint32_t global_sq_id; + uint32_t vlan_ceq_attr; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t rsvd8; + uint32_t rsvd9; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_ctxt { + uint32_t ci_pi; + uint32_t ceq_attr; + uint32_t wq_pfn_hi_type_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd[3]; + uint32_t cqe_sge_len; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t pi_paddr_hi; + uint32_t pi_paddr_lo; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_cqe_ctx { + struct mgmt_msg_head msg_head; + + uint8_t cqe_type; + uint8_t rq_id; + uint8_t threshold_cqe_num; + uint8_t rsvd1; + + uint16_t msix_entry_idx; + uint16_t rsvd2; + + uint32_t ci_addr_hi; + uint32_t ci_addr_lo; + + uint16_t timer_loop; + uint16_t rsvd3; +}; + +struct hinic3_rq_enable { + struct mgmt_msg_head msg_head; + + uint32_t rq_id; + uint8_t rq_enable; + uint8_t rsvd[3]; +}; + #define DB_INFO_QID_SHIFT 0 #define DB_INFO_NON_FILTER_SHIFT 22 #define DB_INFO_CFLAG_SHIFT 23 @@ -142,6 +257,21 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); + +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @retval Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); + /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e8e417b474..3d5f4e4524 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -407,8 +407,7 @@ hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) /* Build indir tbl according to the number of rss queue. */ hinic3_fill_indir_tbl(nic_dev, indir_tbl); - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c new file mode 100644 index 0000000000..d997647f48 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_htn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + ctxt_block->cmdq_hdr.dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id, uint16_t func_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->dest_func_id = func_id; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t func_id; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + func_id = hinic3_global_func_id(nic_dev->hwdev); + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid, func_id); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], + start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->dest_func_id = func_id; + vlan_ctx->start_qid = q_id; + vlan_ctx->vlan_tag = vlan_tag; + vlan_ctx->vlan_sel = 0; 
/* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_HTN_CMD_SVLAN_MODIFY; +} + +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf + sizeof(struct hinic3_rss_cmd_header); + cmd_buf->size = sizeof(struct hinic3_rss_cmd_header) + HINIC3_RSS_INDIR_SIZE; + memset(indir_tbl, 0, HINIC3_RSS_INDIR_SIZE); + + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl[i] = (uint8_t)(*(indir_table + i)); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(indir_tbl, HINIC3_RSS_INDIR_SIZE); + + return HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + memset(cmd_buf->buf, 0, cmd_buf->size); + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + return HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_be32_to_cpu(cmd_buf->buf, HINIC3_RSS_INDIR_SIZE); + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h new file mode 100644 index 0000000000..1245b9c8d8 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_HTN_CMDQ_H_ +#define _HINIC3_HTN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint32_t rsvd[2]; + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_rss_cmd_header { + uint32_t rsv[3]; + uint16_t rsv1; + uint16_t dest_func_id; +}; + +/* NIC HTN CMD */ +enum hinic3_htn_cmd { + 
HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST = 0x20, + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_LD, + HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN, + HINIC3_HTN_CMD_SVLAN_MODIFY, + HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE, + HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE +}; + +struct hinic3_vlan_ctx { + uint32_t rsv[2]; + uint16_t vlan_tag; + uint8_t vlan_sel; + uint8_t vlan_mode; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +#endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/htn_adapt/meson.build b/drivers/net/hinic3/htn_adapt/meson.build new file mode 100644 index 0000000000..17f7ad09e3 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_htn_cmdq.c', +) diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c new file mode 100644 index 0000000000..3d4becf07c --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_stn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->rsvd = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->func_id = func_id; + vlan_ctx->qid = q_id; + vlan_ctx->vlan_id = 
vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i, size; + uint32_t *temp = NULL; + struct nic_rss_indirect_tbl *indir_tbl = NULL; + + indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; + cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); + memset(indir_tbl, 0, sizeof(*indir_tbl)); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); + size = sizeof(indir_tbl->entry) / 4; + temp = (uint32_t *)indir_tbl->entry; + for (i = 0; i < size; i++) { + rte_atomic_thread_fence(rte_memory_order_seq_cst); + temp[i] = rte_cpu_to_be_32(temp[i]); + } + return HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + (void)nic_dev; + memset(cmd_buf->buf, 0, cmd_buf->size); + + return HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint16_t *indir_tbl = NULL; + + indir_tbl = (uint16_t *)cmd_buf->buf; + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h new file mode 100644 index 0000000000..f8d26e9397 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_STN_CMDQ_H_ +#define _HINIC3_STN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t rsvd; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; + uint32_t rsvd; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_vlan_ctx { + uint32_t func_id; + uint32_t qid; /* if qid = 0xFFFF, config for all queues */ + uint32_t vlan_id; + uint32_t vlan_mode; + uint32_t vlan_sel; +}; + +#endif /* _HINIC3_STN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/stn_adapt/meson.build b/drivers/net/hinic3/stn_adapt/meson.build new file mode 100644 index 0000000000..99f7f66ab4 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += 
files( + 'hinic3_stn_cmdq.c', +) -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
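[Editor's note] Patch 3/7 above routes all cmdq buffer preparation through a per-device ops table chosen once at init time (hinic3_nic_cmdq_get_stn_ops() vs. hinic3_nic_cmdq_get_htn_ops(), selected on the NIC_F_HTN_CMDQ feature bit), so the rest of the driver never branches on the NIC generation. Below is a minimal, self-contained C sketch of that dispatch pattern only; it is not driver code, and all demo_* names, buffer sizes, and command-id values are illustrative placeholders.

#include <stdint.h>
#include <stdio.h>

#define DEMO_F_HTN_CMDQ (1u << 21) /* illustrative stand-in for NIC_F_HTN_CMDQ */

struct demo_cmdq_ops {
	/* Fill a command buffer and return the command id to issue. */
	uint8_t (*prepare_set_rss_indir)(const uint32_t *tbl, uint8_t *buf, uint16_t *size);
};

static uint8_t demo_stn_prepare_set_rss_indir(const uint32_t *tbl, uint8_t *buf, uint16_t *size)
{
	(void)tbl; (void)buf;
	*size = 512;      /* placeholder: legacy layout with 16-bit entries */
	return 1;         /* placeholder for the legacy ucode command id */
}

static uint8_t demo_htn_prepare_set_rss_indir(const uint32_t *tbl, uint8_t *buf, uint16_t *size)
{
	(void)tbl; (void)buf;
	*size = 256 + 16; /* placeholder: 8-bit entries plus a small header */
	return 2;         /* placeholder for the HTN command id */
}

static const struct demo_cmdq_ops demo_stn_ops = {
	.prepare_set_rss_indir = demo_stn_prepare_set_rss_indir,
};

static const struct demo_cmdq_ops demo_htn_ops = {
	.prepare_set_rss_indir = demo_htn_prepare_set_rss_indir,
};

/* Chosen once at function init, so the data path never checks the NIC generation again. */
static const struct demo_cmdq_ops *demo_select_cmdq_ops(uint32_t feature_cap)
{
	return (feature_cap & DEMO_F_HTN_CMDQ) ? &demo_htn_ops : &demo_stn_ops;
}

int main(void)
{
	uint32_t indir[256] = {0};
	uint8_t cmd_buf[2048];
	uint16_t size = 0;
	const struct demo_cmdq_ops *ops = demo_select_cmdq_ops(DEMO_F_HTN_CMDQ);
	uint8_t cmd = ops->prepare_set_rss_indir(indir, cmd_buf, &size);

	printf("would issue cmd %u with a %u byte buffer\n", cmd, size);
	return 0;
}

In the patch itself the same choice is keyed off nic_dev->feature_cap in hinic3_func_init() and the selected table is stored in nic_dev->cmdq_ops.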
* [V3 4/7] net/hinic3: add fun init ops to support Compact CQE 2026-03-18 2:19 ` [v3 " Feifei Wang ` (2 preceding siblings ...) 2026-03-18 2:19 ` [V3 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang @ 2026-03-18 2:19 ` Feifei Wang 2026-03-18 2:19 ` [V3 5/7] net/hinic3: add rx " Feifei Wang ` (2 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 2:19 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For the new SPx NIC, use compact CQE to achieve better performance. In this mode, the CQE is uploaded together with the packet. During function init, the CQE's DMA memory mapping is replaced with a CI index, and the hinic3 driver polls the CI to check whether a packet has arrived. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.c | 212 +++++-- drivers/net/hinic3/hinic3_ethdev.h | 117 ++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++++---------- drivers/net/hinic3/hinic3_nic_io.h | 61 +- drivers/net/hinic3/hinic3_rx.h | 18 + drivers/net/hinic3/hinic3_tx.h | 8 + .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 24 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 12 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 24 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 12 +- 10 files changed, 577 insertions(+), 436 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 780b17414a..1010773ac1 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -32,7 +32,7 @@ #define HINIC3_DEFAULT_RX_FREE_THRESH 32u #define HINIC3_DEFAULT_TX_FREE_THRESH 32u -#define HINIC3_RX_WAIT_CYCLE_THRESH 500 +#define HINIC3_RX_WAIT_CYCLE_THRESH 150 /** * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID. @@ -431,8 +431,10 @@ hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) { + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint8_t default_cos = 0; uint8_t valid_cos_bitmap; + uint8_t cos_num_max; uint8_t i; valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap; @@ -441,7 +443,10 @@ hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EFAULT; } - for (i = 0; i < HINIC3_COS_NUM_MAX; i++) { + cos_num_max = nic_dev->feature_cap & NIC_F_HTN_CMDQ ? + HINIC3_COS_NUM_MAX_HTN : HINIC3_COS_NUM_MAX; + + for (i = 0; i < cos_num_max; i++) { if (valid_cos_bitmap & RTE_BIT32(i)) /* Find max cos id as default cos. */ default_cos = i; @@ -644,6 +649,15 @@ hinic3_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +hinic3_dev_tnl_tso_support(struct rte_eth_dev_info *info, struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + if (HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO; +} + /** * Get information about the device. 
* @@ -684,6 +698,8 @@ hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) + hinic3_dev_tnl_tso_support(info, nic_dev); info->hash_key_size = HINIC3_RSS_KEY_SIZE; info->reta_size = HINIC3_RSS_INDIR_SIZE; @@ -926,16 +942,25 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, struct hinic3_rxq *rxq = NULL; const struct rte_memzone *rq_mz = NULL; const struct rte_memzone *cqe_mz = NULL; + const struct rte_memzone *ci_mz = NULL; const struct rte_memzone *pi_mz = NULL; uint16_t rq_depth, rx_free_thresh; uint32_t queue_buf_size; void *db_addr = NULL; int wqe_count; uint32_t buf_size; + uint32_t rx_buf_size; int err; nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + /* Queue depth must be equal to queue 0 */ + if (qid != 0 && nb_desc != nic_dev->rxqs[0]->q_depth) { + PMD_DRV_LOG(WARNING, "rxq%u depth:%u is not equal to queue0 depth:%u.", + qid, nb_desc, nic_dev->rxqs[0]->q_depth); + nb_desc = nic_dev->rxqs[0]->q_depth; + } + /* Queue depth must be power of 2, otherwise will be aligned up. */ rq_depth = (nb_desc & (nb_desc - 1)) ? ((uint16_t)(1U << (rte_log2_u32(nb_desc) + 1))) : nb_desc; @@ -997,8 +1022,7 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH; rxq->rx_deferred_start = rx_conf->rx_deferred_start; /* If buf_len used for function table, need to translated. */ - uint16_t rx_buf_size = - rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rx_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size); if (err) { PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s", @@ -1006,11 +1030,16 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto adjust_bufsize_fail; } - if (buf_size >= HINIC3_RX_BUF_SIZE_4K && - buf_size < HINIC3_RX_BUF_SIZE_16K) - rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; - else - rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + /* If NIC support compact CQE, use compact wqe as default. 
*/ + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + rxq->wqe_type = HINIC3_COMPACT_RQ_WQE; + } else { + if (buf_size >= HINIC3_RX_BUF_SIZE_4K && + buf_size < HINIC3_RX_BUF_SIZE_16K) + rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; + else + rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + } rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type; rxq->wqebb_size = (uint16_t)RTE_BIT32(rxq->wqebb_shift); @@ -1062,36 +1091,52 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_rx_info_fail; } - cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, - rq_depth * sizeof(*rxq->rx_cqe), - RTE_CACHE_LINE_SIZE, socket_id); - if (!cqe_mz) { - PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", - dev->data->name); - err = -ENOMEM; - goto alloc_cqe_mz_fail; - } - memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); - rxq->cqe_mz = cqe_mz; - rxq->cqe_start_paddr = cqe_mz->iova; - rxq->cqe_start_vaddr = cqe_mz->addr; - rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; - - wqe_count = hinic3_rx_fill_wqe(rxq); - if (wqe_count != rq_depth) { - PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", - wqe_count, dev->data->name); - err = -ENOMEM; - goto fill_rx_wqe_fail; + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_ci_mz", qid, + sizeof(*rxq->rq_ci), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (!ci_mz) { + PMD_DRV_LOG(ERR, "Allocate ci mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + + memset(ci_mz->addr, 0, sizeof(*rxq->rq_ci)); + rxq->ci_mz = ci_mz; + rxq->rq_ci = (struct hinic3_rq_ci_wb *)ci_mz->addr; + rxq->rq_ci_paddr = ci_mz->iova; + } else { + cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, + rq_depth * sizeof(*rxq->rx_cqe), + RTE_CACHE_LINE_SIZE, socket_id); + if (!cqe_mz) { + PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); + rxq->cqe_mz = cqe_mz; + rxq->cqe_start_paddr = cqe_mz->iova; + rxq->cqe_start_vaddr = cqe_mz->addr; + rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; + + wqe_count = hinic3_rx_fill_wqe(rxq); + if (wqe_count != rq_depth) { + PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", + wqe_count, dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(cqe_mz); + goto alloc_cqe_ci_mz_fail; + } } - /* Record rxq pointer in rte_eth rx_queues. */ dev->data->rx_queues[qid] = rxq; return 0; -fill_rx_wqe_fail: - hinic3_memzone_free(rxq->cqe_mz); -alloc_cqe_mz_fail: +alloc_cqe_ci_mz_fail: rte_free(rxq->rx_info); alloc_rx_info_fail: @@ -1199,6 +1244,7 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, txq->owner = 1; txq->cos = nic_dev->default_cos; txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->tx_wqe_compact_task = HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev); ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid, HINIC3_CI_Q_ADDR_SIZE, @@ -1246,7 +1292,6 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_tx_info_fail; } - /* Record txq pointer in rte_eth tx_queues. 
*/ dev->data->tx_queues[qid] = txq; return 0; @@ -1274,7 +1319,10 @@ hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) hinic3_free_rxq_mbufs(rxq); - hinic3_memzone_free(rxq->cqe_mz); + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) + hinic3_memzone_free(rxq->ci_mz); + else + hinic3_memzone_free(rxq->cqe_mz); rte_free(rxq->rx_info); rxq->rx_info = NULL; @@ -1323,9 +1371,12 @@ hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) static int hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; + rxq = dev->data->rx_queues[rq_id]; + rc = hinic3_start_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1333,14 +1384,17 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } - dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", - rq_id); - return rc; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", rq_id); + return rc; + } } + + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1358,14 +1412,10 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) static int hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); - return rc; - } rc = hinic3_stop_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1373,6 +1423,15 @@ hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); + return rc; + } + } + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; @@ -1388,6 +1447,7 @@ hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id) HINIC3_SET_TXQ_STARTED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1404,6 +1464,7 @@ hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id) dev->data->name, sq_id); return rc; } + HINIC3_SET_TXQ_STOPPED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -3286,6 +3347,24 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { .flow_ops_get = hinic3_dev_filter_ctrl, }; +static void hinic3_nic_tx_rx_ops_init(struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev)) + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_compact_task_offload; + else + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_normal_task_offload; + + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + nic_dev->rx_ops->nic_rx_get_cqe_info = hinic3_rx_get_compact_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_integrated_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty; + } else { + nic_dev->rx_ops->nic_rx_get_cqe_info = 
hinic3_rx_get_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_separate_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_rq_empty; + } +} + /** * Initialize the network function, including hardware configuration, memory * allocation for data structures, MAC address setup, and interrupt enabling. @@ -3333,6 +3412,27 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto alloc_eth_addr_fail; } + nic_dev->cmdq_ops = rte_zmalloc("cmdq_ops", sizeof(struct hinic3_nic_cmdq_ops), 0); + if (!nic_dev->cmdq_ops) { + PMD_DRV_LOG(ERR, "Allocate cmdq_ops memory failed"); + err = -ENOMEM; + goto alloc_cmdq_ops_fail; + } + + nic_dev->rx_ops = rte_zmalloc("rx_ops", sizeof(struct hinic3_nic_rx_ops), 0); + if (!nic_dev->rx_ops) { + PMD_DRV_LOG(ERR, "Allocate rx_ops memory failed"); + err = -ENOMEM; + goto alloc_rx_ops_fail; + } + + nic_dev->tx_ops = rte_zmalloc("tx_ops", sizeof(struct hinic3_nic_tx_ops), 0); + if (!nic_dev->tx_ops) { + PMD_DRV_LOG(ERR, "Allocate tx_ops memory failed"); + err = -ENOMEM; + goto alloc_tx_ops_fail; + } + nic_dev->mc_list = rte_zmalloc("hinic3_mc", HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0); if (!nic_dev->mc_list) { @@ -3387,9 +3487,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) } if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) - nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); else - nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + + hinic3_nic_tx_rx_ops_init(nic_dev); err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { @@ -3479,6 +3581,18 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) nic_dev->mc_list = NULL; alloc_mc_list_fail: + rte_free(nic_dev->tx_ops); + nic_dev->tx_ops = NULL; + +alloc_tx_ops_fail: + rte_free(nic_dev->rx_ops); + nic_dev->rx_ops = NULL; + +alloc_rx_ops_fail: + rte_free(nic_dev->cmdq_ops); + nic_dev->cmdq_ops = NULL; + +alloc_cmdq_ops_fail: rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 4a5dbb0844..3898edd076 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -14,44 +14,50 @@ #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle) -#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN -#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD -#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD -#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG -#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM -#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM -#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM -#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN -#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK -#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM -#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 -#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 -#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN -#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED -#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH -#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK -#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN -#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM -#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 -#define 
HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO -#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM +#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN +#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD +#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD +#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG +#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM +#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM +#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM +#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN +#define HINIC3_PKT_TX_QINQ_PKT RTE_MBUF_F_TX_QINQ +#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK +#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM +#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 +#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 +#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN +#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED +#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH +#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK +#define HINIC3_PKT_TX_TUNNEL_GRE RTE_MBUF_F_TX_TUNNEL_GRE +#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN +#define HINIC3_PKT_TX_TUNNEL_VXLAN_GPE RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE +#define HINIC3_PKT_TX_TUNNEL_GENEVE RTE_MBUF_F_TX_TUNNEL_GENEVE +#define HINIC3_PKT_TX_TUNNEL_IPIP RTE_MBUF_F_TX_TUNNEL_IPIP +#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM +#define HINIC3_PKT_TX_OUTER_UDP_CKSUM RTE_MBUF_F_TX_OUTER_UDP_CKSUM +#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 +#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO +#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM #define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool" /* Mbuf pool for copy invalid mbuf segs. 
*/ -#define HINIC3_COPY_MEMPOOL_DEPTH 1024 -#define HINIC3_COPY_MEMPOOL_CACHE 128 -#define HINIC3_COPY_MBUF_SIZE 4096 +#define HINIC3_COPY_MEMPOOL_DEPTH 1024 +#define HINIC3_COPY_MEMPOOL_CACHE 128 +#define HINIC3_COPY_MBUF_SIZE 4096 -#define HINIC3_DEV_NAME_LEN 32 -#define DEV_STOP_DELAY_MS 100 -#define DEV_START_DELAY_MS 100 -#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 +#define HINIC3_DEV_NAME_LEN 32 +#define DEV_STOP_DELAY_MS 100 +#define DEV_START_DELAY_MS 100 +#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 -#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) -#define HINIC3_MAX_QUEUE_NUM 64 +#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) +#define HINIC3_MAX_QUEUE_NUM 256 #define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \ ((struct hinic3_nic_dev *)(dev)->data->dev_private) @@ -68,23 +74,34 @@ enum hinic3_tx_cvlan_type { }; enum nic_feature_cap { - NIC_F_CSUM = RTE_BIT32(0), - NIC_F_SCTP_CRC = RTE_BIT32(1), - NIC_F_TSO = RTE_BIT32(2), - NIC_F_LRO = RTE_BIT32(3), - NIC_F_UFO = RTE_BIT32(4), - NIC_F_RSS = RTE_BIT32(5), - NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), - NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), - NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), - NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), - NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), - NIC_F_FDIR = RTE_BIT32(11), - NIC_F_PROMISC = RTE_BIT32(12), - NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_CSUM = RTE_BIT32(0), + NIC_F_SCTP_CRC = RTE_BIT32(1), + NIC_F_TSO = RTE_BIT32(2), + NIC_F_LRO = RTE_BIT32(3), + NIC_F_UFO = RTE_BIT32(4), + NIC_F_RSS = RTE_BIT32(5), + NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), + NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), + NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), + NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), + NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), + NIC_F_FDIR = RTE_BIT32(11), + NIC_F_PROMISC = RTE_BIT32(12), + NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_PTP_1588_V2 = RTE_BIT32(18), + NIC_F_TX_WQE_COMPACT_TASK = RTE_BIT32(19), + NIC_F_RX_HW_COMPACT_CQE = RTE_BIT32(20), + NIC_F_HTN_CMDQ = RTE_BIT32(21), + NIC_F_GENEVE_OFFLOAD = RTE_BIT32(22), + NIC_F_IPXIP_OFFLOAD = RTE_BIT32(23), + NIC_F_TC_FLOWER_OFFLOAD = RTE_BIT32(24), + NIC_F_HTN_FDIR = RTE_BIT32(25), + NIC_F_SQ_RQ_CI_COALESCE = RTE_BIT32(26), + NIC_F_RX_SW_COMPACT_CQE = RTE_BIT32(27), + }; -#define DEFAULT_DRV_FEATURE 0x3FFF +#define DEFAULT_DRV_FEATURE 0x3FC3FFF TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow); TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow); @@ -133,6 +150,10 @@ struct hinic3_nic_dev { struct hinic3_tcam_info tcam; struct hinic3_ethertype_filter_list filter_ethertype_list; struct hinic3_fdir_rule_filter_list filter_fdir_rule_list; + struct hinic3_nic_cmdq_ops *cmdq_ops; + struct hinic3_nic_rx_ops *rx_ops; + struct hinic3_nic_tx_ops *tx_ops; + }; extern const struct rte_flow_ops hinic3_flow_ops; diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c index 7f2972f1d1..9203dcce40 100644 --- a/drivers/net/hinic3/hinic3_nic_io.c +++ b/drivers/net/hinic3/hinic3_nic_io.c @@ -11,310 +11,194 @@ #include "hinic3_rx.h" #include "hinic3_tx.h" -#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 -#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 -#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF -#define HINIC3_DEAULT_DROP_THD_OFF 0 - -#define WQ_PREFETCH_MAX 6 -#define WQ_PREFETCH_MIN 1 -#define WQ_PREFETCH_THRESHOLD 256 - -#define HINIC3_Q_CTXT_MAX \ - ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) - -enum hinic3_qp_ctxt_type { - HINIC3_QP_CTXT_TYPE_SQ, 
- HINIC3_QP_CTXT_TYPE_RQ, -}; - -struct hinic3_qp_ctxt_header { - uint16_t num_queues; - uint16_t queue_type; - uint16_t start_qid; - uint16_t rsvd; -}; - -struct hinic3_sq_ctxt { - uint32_t ci_pi; - uint32_t drop_mode_sp; /**< Packet drop mode and special flags. */ - uint32_t wq_pfn_hi_owner; /**< High PFN and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd0; /**< Reserved field 0. */ - uint32_t pkt_drop_thd; /**< Packet drop threshold. */ - uint32_t global_sq_id; - uint32_t vlan_ceq_attr; /**< VLAN and CEQ attributes. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t rsvd8; /**< Reserved field 8. */ - uint32_t rsvd9; /**< Reserved field 9. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_rq_ctxt { - uint32_t ci_pi; - uint32_t ceq_attr; /**< Completion event queue attributes. */ - uint32_t wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd[3]; /**< Reserved field. */ - uint32_t cqe_sge_len; /**< CQE scatter/gather element length. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t pi_paddr_hi; /**< High 32-bits of PI DMA address. */ - uint32_t pi_paddr_lo; /**< Low 32-bits of PI DMA address. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. 
*/ -}; - -struct hinic3_sq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_rq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; - uint32_t rsvd; -}; - -#define SQ_CTXT_SIZE(num_sqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_sqs) * sizeof(struct hinic3_sq_ctxt))) - -#define RQ_CTXT_SIZE(num_rqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - -#define CI_IDX_HIGH_SHIFH 12 +#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 +#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 +#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF +#define HINIC3_DEAULT_DROP_THD_OFF 0 + +#define WQ_PREFETCH_MAX 6 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +#define CI_IDX_HIGH_SHIFH 12 #define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) -#define SQ_CTXT_PI_IDX_SHIFT 0 -#define SQ_CTXT_CI_IDX_SHIFT 16 +#define SQ_CTXT_PI_IDX_SHIFT 0 +#define SQ_CTXT_CI_IDX_SHIFT 16 -#define SQ_CTXT_PI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_IDX_MASK 0xFFFFU +#define SQ_CTXT_PI_IDX_MASK 0xFFFFU +#define SQ_CTXT_CI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_PI_SET(val, member) \ +#define SQ_CTXT_CI_PI_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 -#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 +#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 +#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 -#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U -#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U +#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U +#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U -#define SQ_CTXT_MODE_SET(val, member) \ - (((val) & SQ_CTXT_MODE_##member##_MASK) \ +#define SQ_CTXT_MODE_SET(val, member) \ + (((val) & SQ_CTXT_MODE_##member##_MASK) \ << SQ_CTXT_MODE_##member##_SHIFT) -#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 +#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 -#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define SQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define SQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ << SQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 -#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 +#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 +#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 -#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ - (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ +#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ + (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ << SQ_CTXT_PKT_DROP_##member##_SHIFT) -#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 +#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 -#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU +#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU #define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_VLAN_TAG_SHIFT 0 -#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 -#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 -#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 +#define 
SQ_CTXT_VLAN_TAG_SHIFT 0 +#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 +#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 +#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 -#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU -#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U -#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U -#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U +#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU +#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U +#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U +#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U -#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ - (((val) & SQ_CTXT_VLAN_##member##_MASK) \ +#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ + (((val) & SQ_CTXT_VLAN_##member##_MASK) \ << SQ_CTXT_VLAN_##member##_SHIFT) -#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define SQ_CTXT_PREF_CI_HI_SHIFT 0 -#define SQ_CTXT_PREF_OWNER_SHIFT 4 +#define SQ_CTXT_PREF_CI_HI_SHIFT 0 +#define SQ_CTXT_PREF_OWNER_SHIFT 4 -#define SQ_CTXT_PREF_CI_HI_MASK 0xFU -#define SQ_CTXT_PREF_OWNER_MASK 0x1U +#define SQ_CTXT_PREF_CI_HI_MASK 0xFU +#define SQ_CTXT_PREF_OWNER_MASK 0x1U -#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define SQ_CTXT_PREF_SET(val, member) \ - (((val) & SQ_CTXT_PREF_##member##_MASK) \ +#define SQ_CTXT_PREF_SET(val, member) \ + (((val) & SQ_CTXT_PREF_##member##_MASK) \ << SQ_CTXT_PREF_##member##_SHIFT) -#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT) -#define RQ_CTXT_PI_IDX_SHIFT 0 -#define RQ_CTXT_CI_IDX_SHIFT 16 +#define RQ_CTXT_PI_IDX_SHIFT 0 +#define RQ_CTXT_CI_IDX_SHIFT 16 -#define RQ_CTXT_PI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_IDX_MASK 0xFFFFU +#define RQ_CTXT_PI_IDX_MASK 0xFFFFU +#define RQ_CTXT_CI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_PI_SET(val, member) \ +#define RQ_CTXT_CI_PI_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 -#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 +#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 +#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 -#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ - (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ +#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ + (((val) & 
RQ_CTXT_CEQ_ATTR_##member##_MASK) \ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT) -#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 -#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 +#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 +#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 -#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U -#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U +#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define RQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define RQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ << RQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define RQ_CTXT_CQE_LEN_SHIFT 28 +#define RQ_CTXT_CQE_LEN_SHIFT 28 -#define RQ_CTXT_CQE_LEN_MASK 0x3U +#define RQ_CTXT_CQE_LEN_MASK 0x3U -#define RQ_CTXT_CQE_LEN_SET(val, member) \ +#define RQ_CTXT_CQE_LEN_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define RQ_CTXT_PREF_CI_HI_SHIFT 0 -#define RQ_CTXT_PREF_OWNER_SHIFT 4 +#define RQ_CTXT_PREF_CI_HI_SHIFT 0 +#define RQ_CTXT_PREF_OWNER_SHIFT 4 -#define RQ_CTXT_PREF_CI_HI_MASK 0xFU -#define RQ_CTXT_PREF_OWNER_MASK 0x1U +#define RQ_CTXT_PREF_CI_HI_MASK 0xFU +#define RQ_CTXT_PREF_OWNER_MASK 0x1U -#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define RQ_CTXT_PREF_SET(val, member) \ - (((val) & RQ_CTXT_PREF_##member##_MASK) \ +#define RQ_CTXT_PREF_SET(val, member) \ + (((val) & RQ_CTXT_PREF_##member##_MASK) \ << RQ_CTXT_PREF_##member##_SHIFT) -#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ << RQ_CTXT_WQ_BLOCK_##member##_SHIFT) #define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4) -#define WQ_PAGE_PFN_SHIFT 12 -#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_PAGE_PFN_SHIFT 12 +#define WQ_BLOCK_PFN_SHIFT 9 #define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT) #define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -/** - * Prepare the command queue header and converted it to big-endian format. - * - * @param[out] qp_ctxt_hdr - * Pointer to command queue context header structure to be initialized. - * @param[in] ctxt_type - * Type of context (SQ/RQ) to be set in header. - * @param[in] num_queues - * Number of queues. - * @param[in] q_id - * Starting queue ID for this context. 
- */ -static void -hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, - enum hinic3_qp_ctxt_type ctxt_type, - uint16_t num_queues, uint16_t q_id) -{ - qp_ctxt_hdr->queue_type = ctxt_type; - qp_ctxt_hdr->num_queues = num_queues; - qp_ctxt_hdr->start_qid = q_id; - qp_ctxt_hdr->rsvd = 0; - - rte_atomic_thread_fence(rte_memory_order_seq_cst); +#define CQE_CTX_CI_ADDR_SHIFT 4 - hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); -} - -/** - * Initialize context structure for specified TXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] sq - * Pointer to TXQ structure. - * @param[in] sq_id - * ID of TXQ being configured. - * @param[out] sq_ctxt - * Pointer to structure that will hold TXQ context. - */ -static void +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, struct hinic3_sq_ctxt *sq_ctxt) { @@ -386,22 +270,13 @@ hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt)); } -/** - * Initialize context structure for specified RXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] rq - * Pointer to RXQ structure. - * @param[out] rq_ctxt - * Pointer to structure that will hold RXQ context. - */ -static void +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) { uint64_t wq_page_addr, wq_page_pfn, wq_block_pfn; uint32_t wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo; uint16_t pi_start, ci_start; - uint16_t wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT; + uint16_t wqe_type = rq->wqe_type; uint8_t intr_disable; /* RQ depth is in unit of 8 Bytes. */ @@ -446,6 +321,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE); rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN); break; + case HINIC3_COMPACT_RQ_WQE: + /* Use 8Byte WQE without SGE for CQE. */ + rq_ctxt->wq_pfn_hi_type_owner |= RQ_CTXT_WQ_PAGE_SET(3, WQE_TYPE); + break; default: PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type); } @@ -495,12 +374,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) static int init_sq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL; - struct hinic3_sq_ctxt *sq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_txq *sq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -511,28 +388,14 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_sqs) { - sq_ctxt_block = cmd_buf->buf; - sq_ctxt = sq_ctxt_block->sq_ctxt; - max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX ? 
HINIC3_Q_CTXT_MAX : (nic_dev->num_sqs - q_id); - - hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_SQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - sq = nic_dev->txqs[curr_id]; - hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]); - } - - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set SQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -563,12 +426,10 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) static int init_rq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL; - struct hinic3_rq_ctxt *rq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_rxq *rq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -579,28 +440,14 @@ init_rq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_rqs) { - rq_ctxt_block = cmd_buf->buf; - rq_ctxt = rq_ctxt_block->rq_ctxt; - max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX ? HINIC3_Q_CTXT_MAX : (nic_dev->num_rqs - q_id); - - hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_RQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - rq = nic_dev->rxqs[curr_id]; - hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]); - } - - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_RQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set RQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -633,9 +480,9 @@ static int clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; struct hinic3_cmd_buf *cmd_buf; uint64_t out_param = 0; + uint8_t cmd; int err; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -644,26 +491,11 @@ clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, return -ENOMEM; } - /* Construct related command request. */ - ctxt_block = cmd_buf->buf; - /* Assumed max_rqs must be equal to max_sqs. */ - ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; - ctxt_block->cmdq_hdr.queue_type = ctxt_type; - ctxt_block->cmdq_hdr.start_qid = 0; - /* - * Add a memory barrier to ensure that instructions are not out of order - * due to compilation optimization. - */ - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); - - cmd_buf->size = sizeof(*ctxt_block); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_clean_tso_lro_space(nic_dev, cmd_buf, ctxt_type); /* Send a command to hardware to clean up queue offload context. 
*/ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if ((err) || (out_param)) { PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, err: %d, out_param: %" PRIu64, @@ -705,6 +537,62 @@ hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev) nic_dev->rx_buff_len = buf_size; } +#define HINIC3_RX_CQE_TIMER_LOOP 15 +#define HINIC3_RX_CQE_COALESCE_NUM 63 + +int +hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rxq *rxq = NULL; + struct hinic3_rq_cqe_ctx cqe_ctx = { 0 }; + rte_iova_t rq_ci_paddr; + uint16_t out_size = sizeof(cqe_ctx); + uint16_t q_id = 0; + uint16_t cmd; + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ) + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN; + else + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX; + + while (q_id < nic_dev->num_rqs) { + rxq = nic_dev->rxqs[q_id]; + if (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE) { + rq_ci_paddr = rxq->rq_ci_paddr >> CQE_CTX_CI_ADDR_SHIFT; + cqe_ctx.ci_addr_hi = upper_32_bits(rq_ci_paddr); + cqe_ctx.ci_addr_lo = lower_32_bits(rq_ci_paddr); + cqe_ctx.threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM; + cqe_ctx.timer_loop = HINIC3_RX_CQE_TIMER_LOOP; + } else { + cqe_ctx.threshold_cqe_num = 0; + cqe_ctx.timer_loop = 0; + } + + cqe_ctx.cqe_type = (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE); + cqe_ctx.msix_entry_idx = rxq->msix_entry_idx; + cqe_ctx.rq_id = q_id; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, + &cqe_ctx, sizeof(cqe_ctx), + &cqe_ctx, &out_size); + if (err || !out_size || cqe_ctx.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed, qid: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, err, cqe_ctx.msg_head.status, out_size); + return -EFAULT; + } + q_id++; + } + + return 0; +} + int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) { @@ -768,13 +656,50 @@ hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) } } + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + /* Init Rxq CQE context. 
*/ + err = hinic3_init_rq_cqe_ctxts(nic_dev); + if (err) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed"); + goto set_cqe_ctx_fail; + } + } + return 0; +set_cqe_ctx_fail: set_cons_idx_table_err: hinic3_clean_root_ctxt(hwdev); return err; } +int +hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rq_enable msg; + uint16_t out_size = sizeof(msg); + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + memset(&msg, 0, sizeof(msg)); + msg.rq_enable = enable; + msg.rq_id = q_id; + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN, + &msg, sizeof(msg), &msg, &out_size); + if (err || !out_size || msg.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq enable failed, qid: %u, enable: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, enable, err, msg.msg_head.status, out_size); + return -EFAULT; + } + + return 0; +} + void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev) { diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index c8e690981b..d0acba4cf4 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -28,11 +28,6 @@ #define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) -#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) -#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -231,6 +226,31 @@ hinic3_write_db(void *db_addr, uint16_t q_id, int cos, uint8_t cflag, uint16_t p */ void hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev); +/** + * Initialize RQ integrated CQE context + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev); + +/** + * Set RQ disable or enable + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * @param[in] q_id + * Receive queue id. + * @param[in] enable + * 1: enable 0: disable + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable); + /** * Initialize qps contexts, set SQ ci attributes, arm all SQ. * @@ -268,7 +288,8 @@ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); /** * Get cmdq ops hardware tile NIC(htn) supported. * - * @retval Pointer to ops. + * @return + * Pointer to ops. */ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); @@ -279,9 +300,6 @@ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); * Pointer to ethernet device structure. * @param[out] s_feature * s_feature driver supported. - * - * @return - * 0 on success, non-zero on failure. */ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_feature); @@ -296,4 +314,29 @@ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_fea */ uint64_t hinic3_get_driver_feature(struct hinic3_nic_dev *nic_dev); +/** + * Initialize context structure for specified TXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] sq + * Pointer to TXQ structure. + * @param[in] sq_id + * ID of TXQ being configured. 
+ * @param[out] sq_ctxt + * Pointer to structure that will hold TXQ context. + */ +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, + struct hinic3_sq_ctxt *sq_ctxt); + +/** + * Initialize context structure for specified RXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] rq + * Pointer to RXQ structure. + * @param[out] rq_ctxt + * Pointer to structure that will hold RXQ context. + */ +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt); + #endif /* _HINIC3_NIC_IO_H_ */ diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 1a92df59b7..7ae39e3e91 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -279,6 +279,24 @@ struct __rte_cache_aligned hinic3_rxq { #endif }; +/* Rx CQE info get callback function */ +typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/* Rx CQE check status callback function */ +typedef bool (*nic_rx_cqe_done_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe **rx_cqe); + +/* Rx CQE empty poll callback function */ +typedef int (*nic_rx_poll_rq_empty_t)(struct hinic3_rxq *rxq); + +struct hinic3_nic_rx_ops { + nic_rx_get_cqe_info_t nic_rx_get_cqe_info; + nic_rx_cqe_done_t nic_rx_cqe_done; + nic_rx_poll_rq_empty_t nic_rx_poll_rq_empty; +}; + uint16_t hinic3_rx_fill_wqe(struct hinic3_rxq *rxq); uint16_t hinic3_rx_fill_buffers(struct hinic3_rxq *rxq); diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index d150f7c6a4..21958a00cc 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -304,6 +304,14 @@ struct __rte_cache_aligned hinic3_txq { #endif }; +/* Tx WQE offload set callback function */ +typedef void (*nic_tx_set_wqe_offload_t)(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +struct hinic3_nic_tx_ops { + nic_tx_set_wqe_offload_t nic_tx_set_wqe_offload; +}; + void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev); void hinic3_free_txq_mbufs(struct hinic3_txq *txq); void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c index d997647f48..634dfe7239 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_htn_cmdq.h" +#define HTN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define HTN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_htn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -27,7 +32,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_htn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id, 
uint16_t func_id) { @@ -45,7 +50,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_htn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t func_id; uint16_t i; @@ -65,9 +70,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; } @@ -75,10 +80,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_htn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_htn_vlan_ctx); + vlan_ctx = (struct hinic3_htn_vlan_ctx *)cmd_buf->buf; vlan_ctx->dest_func_id = func_id; vlan_ctx->start_qid = q_id; @@ -87,7 +92,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_htn_vlan_ctx)); return HINIC3_HTN_CMD_SVLAN_MODIFY; } diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 1245b9c8d8..ffafe39fb5 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -7,7 +7,7 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_htn_qp_ctxt_header { uint32_t rsvd[2]; uint16_t num_queues; uint16_t queue_type; @@ -15,12 +15,12 @@ struct hinic3_qp_ctxt_header { uint16_t dest_func_id; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_clean_queue_ctxt { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_qp_ctxt_block { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; @@ -43,7 +43,7 @@ enum hinic3_htn_cmd { HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE }; -struct hinic3_vlan_ctx { +struct hinic3_htn_vlan_ctx { uint32_t rsv[2]; uint16_t vlan_tag; uint8_t vlan_sel; diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index 3d4becf07c..dfe8598f78 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_stn_cmdq.h" +#define STN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define STN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct 
hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_stn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -26,7 +31,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_stn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id) { @@ -44,7 +49,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_stn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t i; qp_ctxt_block = cmd_buf->buf; @@ -62,9 +67,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; } @@ -72,10 +77,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_stn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_stn_vlan_ctx); + vlan_ctx = (struct hinic3_stn_vlan_ctx *)cmd_buf->buf; vlan_ctx->func_id = func_id; vlan_ctx->qid = q_id; @@ -84,7 +89,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_stn_vlan_ctx)); return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index f8d26e9397..a40c4faa89 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -7,27 +7,27 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_stn_qp_ctxt_header { uint16_t num_queues; uint16_t queue_type; uint16_t start_qid; uint16_t rsvd; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_clean_queue_ctxt { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; uint32_t rsvd; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_qp_ctxt_block { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; }; }; -struct hinic3_vlan_ctx { +struct hinic3_stn_vlan_ctx { uint32_t func_id; uint32_t qid; /* if qid = 0xFFFF, config for all queues */ uint32_t vlan_id; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
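
[Editor's sketch, not part of the posted patch] The patch above routes SQ/RQ context programming through the per-family cmdq ops table (prepare_cmd_buf_qp_context_multi_store, prepare_cmd_buf_clean_tso_lro_space, prepare_cmd_buf_modify_svlan), with hinic3_cmdq_get_stn_ops() and hinic3_cmdq_get_htn_ops() returning the standard-tile and hardware-tile implementations. A minimal sketch of how the ops pointer could be bound once at init is shown below; the helper name and the use of the NIC_F_HTN_CMDQ feature bit as the selector are assumptions for illustration only.

    /*
     * Sketch only: bind the per-family cmdq ops once so that
     * init_sq_ctxts()/init_rq_ctxts() stay format-agnostic. The ops getters
     * and NIC_F_HTN_CMDQ come from the patch; this helper and its call site
     * are hypothetical.
     */
    static void
    hinic3_bind_nic_cmdq_ops(struct hinic3_nic_dev *nic_dev)
    {
    	if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ)
    		nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops();
    	else
    		nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops();
    }

With the table bound up front, the context-programming loops differ only in the command byte returned by the prepare callbacks, which is then passed straight to hinic3_cmdq_direct_resp().
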
* [V3 5/7] net/hinic3: add rx ops to support Compact CQE 2026-03-18 2:19 ` [v3 " Feifei Wang ` (3 preceding siblings ...) 2026-03-18 2:19 ` [V3 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang @ 2026-03-18 2:19 ` Feifei Wang 2026-03-18 2:19 ` [V3 6/7] net/hinic3: add tx " Feifei Wang 2026-03-18 2:19 ` [V3 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 2:19 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt receive path, use different func callback to separate normal CQE process and Compact CQE process. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.h | 3 +- drivers/net/hinic3/hinic3_rx.c | 240 ++++++++++++++++++++++------- drivers/net/hinic3/hinic3_rx.h | 164 +++++++++++++++++++- 3 files changed, 341 insertions(+), 66 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 3898edd076..9061e2b217 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -121,8 +121,7 @@ struct hinic3_nic_dev { uint16_t mtu_size; uint16_t rss_state; - uint8_t num_rss; /**< Number of RSS queues. */ - uint8_t rsvd0; /**< Reserved field 0. */ + uint16_t num_rss; /**< Number of RSS queues. */ uint32_t rx_mode; uint8_t rx_queue_list[HINIC3_MAX_QUEUE_NUM]; diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 3d5f4e4524..363f3f56c8 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -219,11 +219,11 @@ hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq) while (free_wqebbs++ < rxq->q_depth) { ci = hinic3_get_rq_local_ci(rxq); - - rx_cqe = &rxq->rx_cqe[ci]; - - /* Clear done bit. */ - rx_cqe->status = 0; + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + rx_cqe = &rxq->rx_cqe[ci]; + /* Clear done bit. */ + rx_cqe->status = 0; + } rx_info = &rxq->rx_info[ci]; rte_pktmbuf_free(rx_info->mbuf); @@ -299,7 +299,7 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) for (i = 0; i < rearm_wqebbs; i++) { dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]); - /* Fill buffer address only. */ + /* Fill packet dma address into wqe. 
*/ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { rq_wqe->extend_wqe.buf_desc.sge.hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); @@ -307,11 +307,16 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) hinic3_hw_be32(lower_32_bits(dma_addr)); rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; - } else { + } else if (rxq->wqe_type == HINIC3_NORMAL_RQ_WQE) { rq_wqe->normal_wqe.buf_hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); rq_wqe->normal_wqe.buf_lo_addr = hinic3_hw_be32(lower_32_bits(dma_addr)); + } else { + rq_wqe->compact_wqe.buf_hi_addr = + hinic3_hw_be32(upper_32_bits(dma_addr)); + rq_wqe->compact_wqe.buf_lo_addr = + hinic3_hw_be32(lower_32_bits(dma_addr)); } rq_wqe = @@ -355,7 +360,7 @@ hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev, void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1)); @@ -372,7 +377,7 @@ hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev) static void hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, uint32_t *indir_tbl) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; int i = 0; int j; @@ -522,7 +527,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { uint8_t queue_pos; - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list, rss_queue_count, queue_id); @@ -534,8 +539,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, rss_queue_count--; memmove(nic_dev->rx_queue_list + queue_pos, nic_dev->rx_queue_list + queue_pos + 1, - (rss_queue_count - queue_pos) * - sizeof(nic_dev->rx_queue_list[0])); + (rss_queue_count - queue_pos) * sizeof(nic_dev->rx_queue_list[0])); } RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list)); @@ -618,6 +622,32 @@ hinic3_poll_rq_empty(struct hinic3_rxq *rxq) return err; } +int +hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq) +{ + struct hinic3_rx_info *rx_info; + struct hinic3_rq_ci_wb rq_ci; + uint16_t sw_ci; + uint16_t hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(rte_atomic_load_explicit(&rxq->rq_ci->dw1.value, + rte_memory_order_acquire)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + while (sw_ci != hw_ci) { + rx_info = &rxq->rx_info[sw_ci]; + rte_pktmbuf_free(rx_info->mbuf); + rx_info->mbuf = NULL; + + sw_ci++; + sw_ci &= rxq->q_mask; + hinic3_update_rq_local_ci(rxq, 1); + } + + return 0; +} + void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, uint32_t *cqe_hole_cnt, uint32_t *head_ci, uint32_t *head_done) @@ -701,14 +731,17 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) rte_spinlock_unlock(&nic_dev->queue_list_lock); /* Send flush rxq cmd to device. 
*/ - err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + else + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, false); if (err) { PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto rq_flush_failed; } - err = hinic3_poll_rq_empty(rxq); + err = nic_dev->rx_ops->nic_rx_poll_rq_empty(rxq); if (err) { hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt, &head_ci, &head_done); @@ -724,6 +757,7 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return 0; poll_rq_failed: + hinic3_set_rq_enable(nic_dev, rxq->q_id, true); rq_flush_failed: rte_spinlock_lock(&nic_dev->queue_list_lock); set_indir_failed: @@ -746,14 +780,22 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id); if (nic_dev->rss_state == HINIC3_RSS_ENABLE) { - err = hinic3_refill_indir_rqid(rxq); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_FDIR) != 0) + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, true); if (err) { - PMD_DRV_LOG(ERR, - "Refill rq to indirect table failed, eth_dev:%s, queue_idx:%d err:%d", - nic_dev->dev_name, rxq->q_id, err); - hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", + nic_dev->dev_name, rxq->q_id); + } else { + err = hinic3_refill_indir_rqid(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Refill rq to indirect table failed," + "eth_dev:%s, queue_idx:%d err:%d", + nic_dev->dev_name, rxq->q_id, err); + hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + } } } + hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); @@ -772,12 +814,9 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) static inline uint64_t -hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) +hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { - uint16_t vlan_tag; - - vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len); - if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) { + if (!vlan_offload || vlan_tag == 0) { *vlan_tci = 0; return 0; } @@ -788,16 +827,14 @@ hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) } static inline uint64_t -hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) +hinic3_rx_csum(uint16_t csum_err, struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; - uint32_t csum_err; uint64_t flags; if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD))) return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN; - csum_err = HINIC3_GET_RX_CSUM_ERR(status); if (likely(csum_err == 0)) return (HINIC3_PKT_RX_IP_CKSUM_GOOD | HINIC3_PKT_RX_L4_CKSUM_GOOD); @@ -832,11 +869,9 @@ hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) } static inline uint64_t -hinic3_rx_rss_hash(uint32_t offload_type, uint32_t rss_hash_value, uint32_t *rss_hash) +hinic3_rx_rss_hash(uint32_t rss_type, uint32_t rss_hash_value, uint32_t *rss_hash) { - uint32_t rss_type; - rss_type = HINIC3_GET_RSS_TYPES(offload_type); if (likely(rss_type != 0)) { *rss_hash = rss_hash_value; return HINIC3_PKT_RX_RSS_HASH; @@ -931,18 +966,117 @@ hinic3_start_all_rqs(struct rte_eth_dev *eth_dev) return err; } +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + volatile struct hinic3_rq_cqe *cqe = NULL; + uint16_t 
sw_ci; + uint32_t status; + + sw_ci = hinic3_get_rq_local_ci(rxq); + *rx_cqe = &rxq->rx_cqe[sw_ci]; + cqe = *rx_cqe; + + status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&cqe->status, + rte_memory_order_acquire))); + if (!HINIC3_GET_RX_DONE(status)) + return false; + + return true; +} + +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_ci_wb rq_ci; + struct rte_mbuf *rxm = NULL; + uint16_t sw_ci, hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(rte_atomic_load_explicit(&rxq->rq_ci->dw1.value, + rte_memory_order_acquire)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + if (hw_ci == sw_ci) + return false; + + rxm = rxq->rx_info[sw_ci].mbuf; + + *rx_cqe = (volatile struct hinic3_rq_cqe *)rte_mbuf_data_addr_default(rxm); + + return true; +} + +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0 = hinic3_hw_cpu32(rx_cqe->status); + uint32_t dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); + + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + + cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); + cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); + + cqe_info->ptype = HINIC3_GET_RX_PTYPE_OFFLOAD(dw0); + cqe_info->vlan_offload = RQ_CQE_OFFOLAD_TYPE_GET(dw2, VLAN_EN); + cqe_info->rss_type = RQ_CQE_OFFOLAD_TYPE_GET(dw2, RSS_TYPE); + cqe_info->rss_hash_value = dw3; +} + +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0, dw1, dw2; + + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + dw0 = hinic3_hw_cpu32(rx_cqe->status); + dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + } else { + /* Compact Rx CQE mode integrates cqe with packet in big endian way. */ + dw0 = rte_be_to_cpu_32(rx_cqe->status); + dw1 = rte_be_to_cpu_32(rx_cqe->vlan_len); + dw2 = rte_be_to_cpu_32(rx_cqe->offload_type); + } + + cqe_info->cqe_type = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_TYPE); + cqe_info->csum_err = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->vlan_offload = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, VLAN_EN); + cqe_info->cqe_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_LEN); + cqe_info->pkt_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PKT_LEN); + cqe_info->ts_flag = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, TS_FLAG); + cqe_info->ptype = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PTYPE); + cqe_info->rss_hash_value = dw1; + + if (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) { + cqe_info->lro_num = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, NUM_LRO); + cqe_info->vlan_tag = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, VLAN); + } + + if (cqe_info->cqe_type == HINIC3_RQ_CQE_INTEGRATE) + cqe_info->data_offset = + (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) ? 
16 : 8; +} + #define HINIC3_RX_EMPTY_THRESHOLD 3 uint16_t hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct hinic3_rxq *rxq = rx_queue; + struct hinic3_nic_dev *nic_dev = rxq->nic_dev; struct hinic3_rx_info *rx_info = NULL; volatile struct hinic3_rq_cqe *rx_cqe = NULL; + struct hinic3_cqe_info cqe_info = {0}; struct rte_mbuf *rxm = NULL; - uint16_t sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0; - uint32_t status, pkt_len, vlan_len, offload_type, lro_num; + uint16_t sw_ci, rx_buf_len, pkts = 0; + uint32_t pkt_len; uint64_t rx_bytes = 0; - uint32_t hash_value; #ifdef HINIC3_XSTAT_PROF_RX uint64_t t1 = rte_get_tsc_cycles(); @@ -953,20 +1087,22 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) goto out; sw_ci = hinic3_get_rq_local_ci(rxq); - rx_buf_len = rxq->buf_len; while (pkts < nb_pkts) { rx_cqe = &rxq->rx_cqe[sw_ci]; - status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&rx_cqe->status, - rte_memory_order_acquire))); - if (!HINIC3_GET_RX_DONE(status)) { + if (!nic_dev->rx_ops->nic_rx_cqe_done(rxq, &rx_cqe)) { rxq->rxq_stats.empty++; break; } - vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len); + nic_dev->rx_ops->nic_rx_get_cqe_info(rxq, rx_cqe, &cqe_info); - pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len); + pkt_len = cqe_info.pkt_len; + /* + * Compact Rx CQE mode integrates cqe with packet, + * so mbuf length needs to remove the length of cqe. + */ + rx_buf_len = rxq->buf_len - cqe_info.data_offset; rx_info = &rxq->rx_info[sw_ci]; rxm = rx_info->mbuf; @@ -982,7 +1118,7 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (likely(pkt_len <= rx_buf_len)) { rxm->data_len = (uint16_t)pkt_len; rxm->pkt_len = pkt_len; - wqebb_cnt++; + hinic3_update_rq_local_ci(rxq, 1); } else { rxm->data_len = rx_buf_len; rxm->pkt_len = rx_buf_len; @@ -991,33 +1127,28 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * If receive jumbo, updating ci will be done by * hinic3_recv_jumbo_pkt function. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1); - wqebb_cnt = 0; + hinic3_update_rq_local_ci(rxq, 1); hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len); sw_ci = hinic3_get_rq_local_ci(rxq); } - rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_off = RTE_PKTMBUF_HEADROOM + cqe_info.data_offset; rxm->port = rxq->port_id; /* 4. Rx checksum offload. */ - rxm->ol_flags |= hinic3_rx_csum(status, rxq); + rxm->ol_flags |= hinic3_rx_csum(cqe_info.csum_err, rxq); /* 5. Vlan offload. */ - offload_type = hinic3_hw_cpu32(rx_cqe->offload_type); - - rxm->ol_flags |= - hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci); + rxm->ol_flags |= hinic3_rx_vlan(cqe_info.vlan_offload, cqe_info.vlan_tag, + &rxm->vlan_tci); /* 6. RSS. */ - hash_value = hinic3_hw_cpu32(rx_cqe->hash_val); - rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value, + rxm->ol_flags |= hinic3_rx_rss_hash(cqe_info.rss_type, cqe_info.rss_hash_value, &rxm->hash.rss); /* 8. LRO. */ - lro_num = HINIC3_GET_RX_NUM_LRO(status); - if (unlikely(lro_num != 0)) { + if (unlikely(cqe_info.lro_num != 0)) { rxm->ol_flags |= HINIC3_PKT_RX_LRO; - rxm->tso_segsz = pkt_len / lro_num; + rxm->tso_segsz = pkt_len / cqe_info.lro_num; } rx_cqe->status = 0; @@ -1027,9 +1158,6 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } if (pkts) { - /* 9. Update local ci. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt); - /* Update packet stats. 
*/ rxq->rxq_stats.packets += pkts; rxq->rxq_stats.bytes += rx_bytes; diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 7ae39e3e91..2655802467 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -5,15 +5,13 @@ #ifndef _HINIC3_RX_H_ #define _HINIC3_RX_H_ -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0 -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19 -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_SHIFT 0 +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_MASK 0xFFFU +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU #define DPI_EXT_ACTION_FILED (1ULL << 32) @@ -21,6 +19,9 @@ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \ RQ_CQE_OFFOLAD_TYPE_##member##_MASK) +#define HINIC3_GET_RX_PTYPE_OFFLOAD(offload_type) \ + RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PTYPE_OFFLOAD) + #define HINIC3_GET_RX_PKT_TYPE(offload_type) \ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE) @@ -122,6 +123,54 @@ #define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD) +/* Compact CQE Field */ +/* cqe dw0 */ +#define RQ_COMPACT_CQE_STATUS_RXDONE_SHIFT 31 +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_SHIFT 30 +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_SHIFT 29 +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_SHIFT 28 +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_SHIFT 25 +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_SHIFT 24 +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_SHIFT 23 +#define RQ_COMPACT_CQE_STATUS_PKT_MC_SHIFT 21 +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_SHIFT 19 +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PTYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_SHIFT 0 + +#define RQ_COMPACT_CQE_STATUS_RXDONE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_MC_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_MASK 0x3U +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_PTYPE_MASK 0xFFFU +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_STATUS_GET(val, member) \ + ((((val) >> RQ_COMPACT_CQE_STATUS_##member##_SHIFT) & \ + RQ_COMPACT_CQE_STATUS_##member##_MASK)) + +#define HINIC3_RQ_CQE_SEPARATE 0 +#define HINIC3_RQ_CQE_INTEGRATE 1 + +/* cqe dw2 */ +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_SHIFT 24 +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_SHIFT 8 + +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_MASK 0xFFU +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(val, member) \ + (((val) >> RQ_COMPACT_CQE_OFFLOAD_##member##_SHIFT) & \ + RQ_COMPACT_CQE_OFFLOAD_##member##_MASK) + +#define HINIC3_RQ_COMPACT_CQE_16BYTE 0 +#define HINIC3_RQ_COMPACT_CQE_8BYTE 1 /* Rx cqe checksum err */ #define HINIC3_RX_CSUM_IP_CSUM_ERR RTE_BIT32(0) #define HINIC3_RX_CSUM_TCP_CSUM_ERR RTE_BIT32(1) @@ -195,6 +244,25 @@ struct __rte_cache_aligned hinic3_rq_cqe 
{ uint32_t pkt_info; }; +struct hinic3_cqe_info { + uint8_t data_offset; + uint8_t lro_num; + uint8_t vlan_offload; + uint8_t cqe_len; + + uint8_t cqe_type; + uint8_t ts_flag; + uint16_t csum_err; + + uint16_t vlan_tag; + uint16_t ptype; + + uint16_t pkt_len; + uint16_t rss_type; + + uint32_t rss_hash_value; +}; + /** * Attention: please do not add any member in hinic3_rx_info * because rxq bulk rearm mode will write mbuf in rx_info. @@ -220,13 +288,32 @@ struct hinic3_rq_normal_wqe { uint32_t cqe_lo_addr; }; +struct hinic3_rq_compact_wqe { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + struct hinic3_rq_wqe { union { + struct hinic3_rq_compact_wqe compact_wqe; struct hinic3_rq_normal_wqe normal_wqe; struct hinic3_rq_extend_wqe extend_wqe; }; }; +struct hinic3_rq_ci_wb { + union { + struct { + uint16_t cqe_num; + uint16_t hw_ci; + } bs; + uint32_t value; + } dw1; + + uint32_t rsvd[3]; +}; + + struct __rte_cache_aligned hinic3_rxq { struct hinic3_nic_dev *nic_dev; @@ -263,6 +350,10 @@ struct __rte_cache_aligned hinic3_rxq { struct hinic3_rq_cqe *rx_cqe; struct rte_mempool *mb_pool; + const struct rte_memzone *ci_mz; + struct hinic3_rq_ci_wb *rq_ci; + rte_iova_t rq_ci_paddr; + const struct rte_memzone *cqe_mz; rte_iova_t cqe_start_paddr; void *cqe_start_vaddr; @@ -308,6 +399,7 @@ void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev); int hinic3_update_rss_config(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq); int hinic3_poll_rq_empty(struct hinic3_rxq *rxq); void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, @@ -369,4 +461,60 @@ hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, uint16_t wqe_cnt) rxq->delta += wqe_cnt; } +/** + * Get receive cqe information + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * Receive cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Get receive compact cqe information + * + * @param[in] rx_queue + * Receive queue + * @param[in] rx_cqe + * Receive compact cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Check whether pkt is received when CQE is separated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/** + * Check whether pkt is received when CQE is integrated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + #endif /* _HINIC3_RX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
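
[Editor's sketch, not part of the posted patch] The Rx patch above hides the CQE-format differences behind the three hinic3_nic_rx_ops callbacks, so hinic3_recv_pkts() only calls nic_rx_cqe_done/nic_rx_get_cqe_info and never branches on the format itself. One illustrative way to group the two variants added by the patch is sketched below; the callback names and struct hinic3_nic_rx_ops are from the patch, while the two static tables are assumptions.

    /*
     * Sketch only: per-format callback tables built from the functions the
     * patch adds. The table objects themselves are hypothetical.
     */
    static const struct hinic3_nic_rx_ops rx_ops_separate_cqe = {
    	.nic_rx_get_cqe_info  = hinic3_rx_get_cqe_info,
    	.nic_rx_cqe_done      = hinic3_rx_separate_cqe_done,
    	.nic_rx_poll_rq_empty = hinic3_poll_rq_empty,
    };

    static const struct hinic3_nic_rx_ops rx_ops_integrated_cqe = {
    	.nic_rx_get_cqe_info  = hinic3_rx_get_compact_cqe_info,
    	.nic_rx_cqe_done      = hinic3_rx_integrated_cqe_done,
    	.nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty,
    };

A device reporting HINIC3_SUPPORT_RX_HW_COMPACT_CQE would presumably point nic_dev->rx_ops at the integrated table when its queues use HINIC3_COMPACT_RQ_WQE, and at the separate table otherwise, keeping the hot receive loop free of per-format branches.
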
* [V3 6/7] net/hinic3: add tx ops to support Compact CQE 2026-03-18 2:19 ` [v3 " Feifei Wang ` (4 preceding siblings ...) 2026-03-18 2:19 ` [V3 5/7] net/hinic3: add rx " Feifei Wang @ 2026-03-18 2:19 ` Feifei Wang 2026-03-18 2:19 ` [V3 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 2:19 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt send path, use different func callback to configure compact wqe and normal wqe offload. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_tx.c | 454 +++++++++++++++++---------------- drivers/net/hinic3/hinic3_tx.h | 146 +++++++++-- 2 files changed, 361 insertions(+), 239 deletions(-) diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index c896fcc76b..fca94dd08e 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -21,6 +21,7 @@ #define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET 1 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0 +#define MAX_TSO_NUM_FRAG 1024 #define HINIC3_TX_OFFLOAD_MASK \ (HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT) @@ -28,7 +29,8 @@ #define HINIC3_TX_CKSUM_OFFLOAD_MASK \ (HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \ HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \ - HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG) + HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_OUTER_UDP_CKSUM | \ + HINIC3_PKT_TX_TCP_SEG) static inline uint16_t hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq) @@ -56,26 +58,23 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq) } static void * -hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) +hinic3_sq_get_wqebbs(struct hinic3_txq *sq, uint16_t num_wqebbs, uint16_t *prod_idx) { - uint16_t cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx); - uint32_t end_pi; + *prod_idx = MASKED_QUEUE_IDX(sq, sq->prod_idx); + sq->prod_idx += num_wqebbs; - end_pi = cur_pi + wqe_info->wqebb_cnt; - sq->prod_idx += wqe_info->wqebb_cnt; + return NIC_WQE_ADDR(sq, *prod_idx); +} - wqe_info->owner = (uint8_t)(sq->owner); - wqe_info->pi = cur_pi; - wqe_info->wrapped = 0; +static inline uint16_t +hinic3_get_and_update_sq_owner(struct hinic3_txq *sq, uint16_t curr_pi, uint16_t wqebb_cnt) +{ + uint16_t owner = sq->owner; - if (unlikely(end_pi >= sq->q_depth)) { + if (unlikely(curr_pi + wqebb_cnt >= sq->q_depth)) sq->owner = !sq->owner; - if (likely(end_pi > sq->q_depth)) - wqe_info->wrapped = (uint8_t)(sq->q_depth - cur_pi); - } - - return NIC_WQE_ADDR(sq, cur_pi); + return owner; } static inline void @@ -90,61 +89,39 @@ hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) /** * Sets the WQE combination information in the transmit queue (SQ). * - * @param[in] txq + * @param[in] sq * Point to send queue. * @param[out] wqe_combo * Point to wqe_combo of send queue(SQ). - * @param[in] wqe - * Point to wqe of send queue(SQ). * @param[in] wqe_info * Point to wqe_info of send queue(SQ). 
*/ static void -hinic3_set_wqe_combo(struct hinic3_txq *txq, +hinic3_set_wqe_combo(struct hinic3_txq *sq, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_sq_wqe *wqe, struct hinic3_wqe_info *wqe_info) { - wqe_combo->hdr = &wqe->compact_wqe.wqe_desc; - - if (wqe_info->offload) { - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->task = (struct hinic3_sq_task *) - (void *)txq->sq_head_addr; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr + txq->wqebb_size); - } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = wqe->extend_wqe.buf_desc; - } + uint16_t tmp_pi; - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->hdr = hinic3_sq_get_wqebbs(sq, 1, &wqe_info->pi); + if (wqe_info->wqebb_cnt == 1) { + /* compact wqe */ + wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_4BYTES; + wqe_combo->task = (struct hinic3_sq_task *)&wqe_combo->hdr->queue_info; + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, 1); return; } - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->bds_head = - (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task); - } + /* extend normal wqe */ + wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->task = hinic3_sq_get_wqebbs(sq, 1, &tmp_pi); + if (wqe_info->sge_cnt > 1) + wqe_combo->bds_head = hinic3_sq_get_wqebbs(sq, wqe_info->sge_cnt - 1, &tmp_pi); - if (wqe_info->wqebb_cnt > 1) { - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS; - - /* This section used as vlan insert, needs to clear. */ - wqe_combo->bds_head->rsvd = 0; - } else { - wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; - } + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, wqe_info->wqebb_cnt); } int @@ -311,6 +288,8 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) /** * Prepare the data packet to be sent and calculate the internal L3 offset. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to be processed. * @param[out] inner_l3_offset @@ -319,14 +298,20 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) * 0 as success, -EINVAL as failure. */ static int -hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) +hinic3_tx_offload_pkt_prepare(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + uint16_t *inner_l3_offset) { uint64_t ol_flags = mbuf->ol_flags; - /* Only support vxlan offload. 
*/ - if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) && - (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN))) - return -EINVAL; + if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) { + if (!(((ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN) && + HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_GENEVE) && + HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_IPIP) && + HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)))) + return -EINVAL; + } #ifdef RTE_LIBRTE_ETHDEV_DEBUG if (rte_validate_tx_offload(mbuf) != 0) @@ -358,107 +343,121 @@ hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) return 0; } -static inline void -hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, uint16_t vlan_tag, - uint8_t vlan_type) +void +hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) +{ + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l4_en, INNER_L4_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l3_en, INNER_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->encapsulation, TUNNEL_FLAG); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l3_en, OUT_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l4_en, OUT_L4_EN); + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); + + if (wqe_combo->task_type == SQ_WQE_TASKSECT_16BYTES) { + task->ip_identify = 0; + task->pkt_info2 = 0; + task->vlan_offload = 0; + task->vlan_offload = SQ_TASK_INFO3_SET(offload_info->vlan_tag, VLAN_TAG) | + SQ_TASK_INFO3_SET(offload_info->vlan_sel, VLAN_TYPE) | + SQ_TASK_INFO3_SET(offload_info->vlan_valid, VLAN_TAG_VALID); + task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + } +} + +void +hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) | - SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) | - SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID); + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + wqe_combo->task->pkt_info0 = + SQ_TASK_INFO_SET(offload_info->out_l3_en, OUT_L3_EN) | + SQ_TASK_INFO_SET(offload_info->out_l4_en, OUT_L4_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l3_en, INNER_L3_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l4_en, INNER_L4_EN) | + SQ_TASK_INFO_SET(offload_info->vlan_valid, VLAN_VALID) | + SQ_TASK_INFO_SET(offload_info->vlan_sel, VLAN_SEL) | + SQ_TASK_INFO_SET(offload_info->vlan_tag, VLAN_TAG); + + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); } -/** - * Set the corresponding offload information based on ol_flags of the mbuf. - * - * @param[in] mbuf - * Point to the mbuf for which offload needs to be set in the sending queue. - * @param[out] task - * Point to task of send queue(SQ). - * @param[out] wqe_info - * Point to wqe_info of send queue(SQ). - * @return - * 0 as success, -EINVAL as failure. 
- */ static int -hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task, - struct hinic3_wqe_info *wqe_info) +hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, + struct rte_mbuf *mbuf, + struct hinic3_sq_wqe_combo *wqe_combo, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t pld_offset = 0; - uint32_t queue_info = 0; - uint16_t vlan_tag; - - task->pkt_info0 = 0; - task->ip_identify = 0; - task->pkt_info2 = 0; - task->vlan_offload = 0; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; /* Vlan offload. */ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) { - vlan_tag = mbuf->vlan_tci; - hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0); - task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + offload_info->vlan_valid = 1; + offload_info->vlan_tag = mbuf->vlan_tci; + offload_info->vlan_sel = HINIC3_TX_TPID0; } - /* Cksum offload. */ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK)) - return 0; + goto set_tx_wqe_offload; /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - pld_offset = wqe_info->payload_offset; - if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET) + wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); - - queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF); - - /* Set MSS value. */ - queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS); + offload_info->inner_l3_en = 1; + offload_info->inner_l4_en = 1; + wqe_info->queue_info.tso = 1; + wqe_info->queue_info.mss = mbuf->tso_segsz; } else { if (ol_flags & HINIC3_PKT_TX_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); + offload_info->inner_l3_en = 1; switch (ol_flags & HINIC3_PKT_TX_L4_MASK) { case HINIC3_PKT_TX_TCP_CKSUM: case HINIC3_PKT_TX_UDP_CKSUM: case HINIC3_PKT_TX_SCTP_CKSUM: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); + offload_info->inner_l4_en = 1; break; - case HINIC3_PKT_TX_L4_NO_CKSUM: break; - default: PMD_DRV_LOG(INFO, "not support pkt type"); return -EINVAL; } } - /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) { case HINIC3_PKT_TX_TUNNEL_VXLAN: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG); + case HINIC3_PKT_TX_TUNNEL_VXLAN_GPE: + case HINIC3_PKT_TX_TUNNEL_GENEVE: + offload_info->encapsulation = 1; + wqe_info->queue_info.udp_dp_en = 1; break; - case 0: break; default: - /* For non UDP/GRE tunneling, drop the tunnel packet. 
*/ PMD_DRV_LOG(INFO, "not support tunnel pkt type"); return -EINVAL; } if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN); + offload_info->out_l3_en = 1; - task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); - task->pkt_info2 = hinic3_hw_be32(task->pkt_info2); - wqe_info->queue_info = queue_info; + if (ol_flags & HINIC3_PKT_TX_OUTER_UDP_CKSUM) + offload_info->out_l4_en = 1; + +set_tx_wqe_offload: + nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -477,7 +476,9 @@ static bool hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) { uint32_t total_len, limit_len, checked_len, left_len, adjust_mss; - uint32_t i, max_sges, left_sges, first_len; + uint32_t max_sges, left_sges, first_len; + uint32_t payload_len, frag_num; + uint32_t i; struct rte_mbuf *mbuf_head, *mbuf_first; struct rte_mbuf *mbuf_pre = mbuf; @@ -485,6 +486,17 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) mbuf_head = mbuf; mbuf_first = mbuf; + /* Calculate the number of message payload frag, + * if it exceeds the hardware limit of 10 bits, + * packet will be discarded. + */ + payload_len = mbuf_head->pkt_len - wqe_info->payload_offset; + frag_num = (payload_len + mbuf_head->tso_segsz - 1) / mbuf_head->tso_segsz; + if (frag_num > MAX_TSO_NUM_FRAG) { + PMD_DRV_LOG(WARNING, "tso frag num over hw limit, frag_num:0x%x.", frag_num); + return false; + } + /* Tso sge number validation. */ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) { checked_len = 0; @@ -544,9 +556,48 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) return true; } +static int +hinic3_non_tso_pkt_pre_process(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +{ + struct rte_mbuf *mbuf_pkt = mbuf; + uint32_t total_len = 0; + uint16_t i; + + if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) + return 0; + + /* Non-tso packet length must less than 64KB. */ + if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) + return -EINVAL; + + /* + * Mbuf number of non-tso packet must less than the sge number + * that nic can support. The excess part will be copied to another + * mbuf. + */ + for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { + total_len += mbuf_pkt->data_len; + mbuf_pkt = mbuf_pkt->next; + } + + /* + * Max copy mbuf size is 4KB, packet will be dropped directly, + * if total copy length is more than it. + */ + if ((total_len + HINIC3_COPY_MBUF_SIZE) < mbuf->pkt_len) + return -EINVAL; + + wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; + wqe_info->cpy_mbuf_cnt = 1; + + return 0; +} + /** * Checks and processes transport offload information for data packets. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to send. * @param[in] wqe_info @@ -555,56 +606,29 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) * 0 as success, -EINVAL as failure. */ static int -hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t i, total_len, inner_l3_offset = 0; + uint16_t inner_l3_offset = 0; int err; - struct rte_mbuf *mbuf_pkt = NULL; wqe_info->sge_cnt = mbuf->nb_segs; + wqe_info->cpy_mbuf_cnt = 0; /* Check if the packet set available offload flags. 
*/ if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) { wqe_info->offload = 0; - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } wqe_info->offload = 1; - err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset); + err = hinic3_tx_offload_pkt_prepare(nic_dev, mbuf, &inner_l3_offset); if (err) return err; - /* Non tso mbuf only check sge num. */ + /* Non-tso mbuf only check sge num. */ if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { - if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) - /* Non tso packet len must less than 64KB. */ - return -EINVAL; - - if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) - /* Valid non-tso mbuf. */ - return 0; - - /* - * The number of non-tso packet fragments must be less than 38, - * and mbuf segs greater than 38 must be copied to other - * buffers. - */ - total_len = 0; - mbuf_pkt = mbuf; - for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { - total_len += mbuf_pkt->data_len; - mbuf_pkt = mbuf_pkt->next; - } - - /* Default support copy total 4k mbuf segs. */ - if ((uint32_t)(total_len + (uint16_t)HINIC3_COPY_MBUF_SIZE) < - mbuf->pkt_len) - return -EINVAL; - - wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; - wqe_info->cpy_mbuf_cnt = 1; - - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } /* Tso mbuf. */ @@ -629,6 +653,7 @@ hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); buf_descs->len = hinic3_hw_be32(len); + buf_descs->rsvd = 0; } static inline struct rte_mbuf * @@ -701,7 +726,6 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, { struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head; - uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt; uint16_t real_segs = mbuf->nb_segs; rte_iova_t dma_addr; @@ -736,11 +760,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, * Parts of wqe is in sq bottom while parts * of wqe is in sq head. 
*/ - if (unlikely(wqe_info->wrapped && - (uint64_t)buf_desc == txq->sq_bot_sge_addr)) - buf_desc = (struct hinic3_sq_bufdesc *) - (void *)txq->sq_head_addr; - + if (unlikely((uint64_t)buf_desc == txq->sq_bot_sge_addr)) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); buf_desc++; } @@ -777,10 +798,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, hinic3_hw_be32(lower_32_bits(dma_addr)); wqe_desc->ctrl_len = mbuf->data_len; } else { - if (unlikely(wqe_info->wrapped && - ((uint64_t)buf_desc == txq->sq_bot_sge_addr))) - buf_desc = (struct hinic3_sq_bufdesc *) - txq->sq_head_addr; + if (unlikely(((uint64_t)buf_desc == txq->sq_bot_sge_addr))) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); } @@ -802,44 +821,44 @@ static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, struct hinic3_wqe_info *wqe_info) { + struct hinic3_queue_info *queue_info = &wqe_info->queue_info; struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; + uint32_t *qsf = &wqe_desc->queue_info; - if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) { - wqe_desc->ctrl_len |= - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | - SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | - SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - /* Compact wqe queue_info will transfer to ucode. */ - wqe_desc->queue_info = 0; - - return; - } - - wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | - SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | + wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DIRECT) | SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - wqe_desc->queue_info = wqe_info->queue_info; - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC); - - if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) { - wqe_desc->queue_info |= - SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); - } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) < - TX_MSS_MIN) { - /* Mss should not less than 80. */ - wqe_desc->queue_info = - SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS); - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { + wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | + SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | + SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + + *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); + + if (!SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS)) { + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); + } else if (SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS) < TX_MSS_MIN) { + /* MSS should not less than 80. 
*/
+			*qsf = SQ_CTRL_QUEUE_INFO_CLEAR(*qsf, MSS);
+			*qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS);
+		}
+		*qsf = hinic3_hw_be32(*qsf);
+	} else {
+		wqe_desc->ctrl_len |= SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->sctp, SCTP) |
+			SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->udp_dp_en, UDP_DP_EN) |
+			SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->ufo, UFO) |
+			SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE);
 	}
 
-	wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info);
+	wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
 }
 
 /**
@@ -861,9 +880,7 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct hinic3_tx_info *tx_info = NULL;
 	struct rte_mbuf *mbuf_pkt = NULL;
 	struct hinic3_sq_wqe_combo wqe_combo = {0};
-	struct hinic3_sq_wqe *sq_wqe = NULL;
 	struct hinic3_wqe_info wqe_info = {0};
-
 	uint32_t offload_err, free_cnt;
 	uint64_t tx_bytes = 0;
 	uint16_t free_wqebb_cnt, nb_tx;
@@ -885,16 +902,28 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	/* Tx loop routine. */
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		mbuf_pkt = *tx_pkts++;
-		if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) {
+		if (unlikely(hinic3_get_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_info))) {
 			txq->txq_stats.offload_errors++;
 			break;
 		}
 
-		if (!wqe_info.offload)
-			wqe_info.wqebb_cnt = wqe_info.sge_cnt;
-		else
-			/* Use extended sq wqe with normal TS. */
-			wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1;
+		wqe_info.wqebb_cnt = wqe_info.sge_cnt;
+		if (likely(wqe_info.offload || wqe_info.wqebb_cnt > 1)) {
+			if (txq->tx_wqe_compact_task) {
+				/**
+				 * One more wqebb is needed for compact task under two situations:
+				 * 1. TSO: MSS field is needed, no available space for
+				 * compact task in compact wqe.
+				 * 2. SGE number > 1: wqe is handled as extended wqe by nic.
+				 */
+				if (mbuf_pkt->ol_flags & HINIC3_PKT_TX_TCP_SEG ||
+				    wqe_info.wqebb_cnt > 1)
+					wqe_info.wqebb_cnt++;
+			} else {
+				/* Use extended sq wqe with normal TS */
+				wqe_info.wqebb_cnt++;
+			}
+		}
 
 		free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq);
 		if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
@@ -907,28 +936,16 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 		}
 
-		/* Get sq wqe address from wqe_page. */
-		sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info);
-		if (unlikely(!sq_wqe)) {
-			txq->txq_stats.tx_busy++;
-			break;
-		}
-
-		/* Task or bd section maybe wrapped for one wqe. */
-		hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info);
+		/* Task or bd section may be wrapped for one wqe. */
+		hinic3_set_wqe_combo(txq, &wqe_combo, &wqe_info);
 
-		wqe_info.queue_info = 0;
 		/* Fill tx packet offload into qsf and task field. */
-		if (wqe_info.offload) {
-			offload_err = hinic3_set_tx_offload(mbuf_pkt,
-							    wqe_combo.task,
-							    &wqe_info);
+		offload_err = hinic3_set_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_combo, &wqe_info);
 			if (unlikely(offload_err)) {
 				hinic3_put_sq_wqe(txq, &wqe_info);
 				txq->txq_stats.offload_errors++;
 				break;
 			}
-		}
 
 		/* Fill sq_wqe buf_desc and bd_desc. */
 		err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo,
@@ -944,7 +961,12 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		tx_info->mbuf = mbuf_pkt;
 		tx_info->wqebb_cnt = wqe_info.wqebb_cnt;
 
-		hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info);
+		/*
+		 * For wqe compact type, no need to prepare
+		 * sq ctrl info.
+ */ + if (wqe_combo.wqe_type != SQ_WQE_COMPACT_TYPE) + hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); tx_bytes += mbuf_pkt->pkt_len; } diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index 21958a00cc..e0ed9908ad 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -6,30 +6,40 @@ #define _HINIC3_TX_H_ #define MAX_SINGLE_SGE_SIZE 65536 -#define HINIC3_NONTSO_PKT_MAX_SGE 38 /**< non-tso max sge 38. */ +#define HINIC3_NONTSO_PKT_MAX_SGE 32 /**< non-tso max sge 32. */ #define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE) #define HINIC3_TSO_PKT_MAX_SGE 127 /**< tso max sge 127. */ #define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE) -/* Tx offload info. */ -struct hinic3_tx_offload_info { - uint8_t outer_l2_len; - uint8_t outer_l3_type; - uint16_t outer_l3_len; - - uint8_t inner_l2_len; - uint8_t inner_l3_type; - uint16_t inner_l3_len; - - uint8_t tunnel_length; - uint8_t tunnel_type; - uint8_t inner_l4_type; - uint8_t inner_l4_len; +/* Tx wqe queue info */ +struct hinic3_queue_info { + uint8_t pri; + uint8_t uc; + uint8_t sctp; + uint8_t udp_dp_en; + uint8_t tso; + uint8_t ufo; + uint8_t payload_offset; + uint8_t pkt_type; + uint16_t mss; + uint16_t rsvd; +}; - uint16_t payload_offset; - uint8_t inner_l4_tcp_udp; - uint8_t rsvd0; /**< Reserved field. */ +/* Tx wqe offload info */ +struct hinic3_offload_info { + uint8_t encapsulation; + uint8_t esp_next_proto; + uint8_t inner_l4_en; + uint8_t inner_l3_en; + uint8_t out_l4_en; + uint8_t out_l3_en; + uint8_t ipsec_offload; + uint8_t pkt_1588; + uint8_t vlan_sel; + uint8_t vlan_valid; + uint16_t vlan_tag; + uint32_t ip_identify; }; /* Tx wqe ctx. */ @@ -42,14 +52,15 @@ struct hinic3_wqe_info { uint8_t rsvd0; /**< Reserved field 0. */ uint16_t payload_offset; - uint8_t wrapped; + uint8_t rsvd1; /**< Reserved field 1. */ uint8_t owner; uint16_t pi; uint16_t wqebb_cnt; - uint16_t rsvd1; /**< Reserved field 1. */ + uint16_t rsvd2; /**< Reserved field 2. */ - uint32_t queue_info; + struct hinic3_queue_info queue_info; + struct hinic3_offload_info offload_info; }; /* Descriptor for the send queue of wqe. */ @@ -103,8 +114,15 @@ struct hinic3_sq_wqe_combo { uint32_t task_type; }; -enum sq_wqe_data_format { +/* Tx queue ctrl info */ +enum sq_wqe_type { SQ_NORMAL_WQE = 0, + SQ_DIRECT_WQE = 1, +}; + +enum sq_wqe_data_format { + SQ_WQE_SGL = 0, + SQ_WQE_INLINE_DATA = 1, }; /* Indicates the type of a WQE. */ @@ -117,7 +135,7 @@ enum sq_wqe_ec_type { /* Indicates the type of tasks with different lengths. 
*/ enum sq_wqe_tasksect_len_type { - SQ_WQE_TASKSECT_46BITS = 0, + SQ_WQE_TASKSECT_4BYTES = 0, SQ_WQE_TASKSECT_16BYTES = 1, }; @@ -177,6 +195,33 @@ enum sq_wqe_tasksect_len_type { ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT))) +/* Compact queue info */ +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_SHIFT 14 +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_SHIFT 16 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_SHIFT 24 +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_SHIFT 25 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_SHIFT 26 +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_SHIFT 27 + +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_MASK 0x3U +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_MASK 0xFFU +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_MASK 0x1U + +#define SQ_CTRL_COMPACT_QUEUE_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_GET(val, member) \ + (((val) >> SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) & \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_CLEAR(val, member) \ + ((val) & (~(SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT))) + /* Setting and obtaining task information */ #define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19 #define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22 @@ -229,6 +274,37 @@ enum sq_wqe_tasksect_len_type { (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \ SQ_TASK_INFO3_##member##_MASK) +/* compact wqe task field */ +#define SQ_TASK_INFO_PKT_1588_SHIFT 31 +#define SQ_TASK_INFO_IPSEC_PROTO_SHIFT 30 +#define SQ_TASK_INFO_OUT_L3_EN_SHIFT 28 +#define SQ_TASK_INFO_OUT_L4_EN_SHIFT 27 +#define SQ_TASK_INFO_INNER_L3_EN_SHIFT 25 +#define SQ_TASK_INFO_INNER_L4_EN_SHIFT 24 +#define SQ_TASK_INFO_ESP_NEXT_PROTO_SHIFT 22 +#define SQ_TASK_INFO_VLAN_VALID_SHIFT 19 +#define SQ_TASK_INFO_VLAN_SEL_SHIFT 16 +#define SQ_TASK_INFO_VLAN_TAG_SHIFT 0 + +#define SQ_TASK_INFO_PKT_1588_MASK 0x1U +#define SQ_TASK_INFO_IPSEC_PROTO_MASK 0x1U +#define SQ_TASK_INFO_OUT_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_OUT_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_ESP_NEXT_PROTO_MASK 0x3U +#define SQ_TASK_INFO_VLAN_VALID_MASK 0x1U +#define SQ_TASK_INFO_VLAN_SEL_MASK 0x7U +#define SQ_TASK_INFO_VLAN_TAG_MASK 0xFFFFU + +#define SQ_TASK_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_TASK_INFO_##member##_MASK) << \ + SQ_TASK_INFO_##member##_SHIFT) + +#define SQ_TASK_INFO_GET(val, member) \ + (((val) >> SQ_TASK_INFO_##member##_SHIFT) & \ + SQ_TASK_INFO_##member##_MASK) + /* Defines the TX queue status. 
*/ enum hinic3_txq_status { HINIC3_TXQ_STATUS_START = 0, @@ -298,6 +374,8 @@ struct __rte_cache_aligned hinic3_txq { uint64_t sq_head_addr; uint64_t sq_bot_sge_addr; uint32_t cos; + uint8_t tx_wqe_compact_task; + uint8_t rsvd[3]; struct hinic3_txq_stats txq_stats; #ifdef HINIC3_XSTAT_PROF_TX uint64_t prof_tx_end_tsc; @@ -319,4 +397,26 @@ uint16_t hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb int hinic3_stop_sq(struct hinic3_txq *txq); int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev); int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt); + +/** + * Set wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +/** + * Set compact wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); #endif /**< _HINIC3_TX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
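
For reference, the WQEBB accounting that this patch introduces in hinic3_xmit_pkts() can be boiled
down to the following minimal standalone sketch. The struct and function names below are simplified,
hypothetical stand-ins rather than the driver's own types; only the branch logic mirrors the patch.

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the driver structures (hypothetical). */
struct pkt_tx_info {
	uint16_t sge_cnt;  /* buffer descriptors needed by the mbuf chain */
	bool offload;      /* any checksum/VLAN/TSO offload requested */
	bool tso;          /* TSO requested, i.e. MSS must be carried */
};

/*
 * Number of WQE building blocks (WQEBBs) consumed by one packet.
 * Compact-task queues only spend an extra WQEBB when TSO is used or
 * more than one SGE is present (the WQE then becomes extended), while
 * normal-task queues add one WQEBB for the 16-byte task section for
 * any offloaded or multi-SGE packet.
 */
static uint16_t tx_wqebb_cnt(const struct pkt_tx_info *p, bool compact_task)
{
	uint16_t cnt = p->sge_cnt;

	if (!p->offload && cnt == 1)
		return cnt;

	if (compact_task) {
		if (p->tso || cnt > 1)
			cnt++;
	} else {
		cnt++;
	}

	return cnt;
}

With this rule, a 3-SGE non-TSO packet takes 4 WQEBBs on either queue type, while a single-SGE
checksum-offloaded packet takes 1 WQEBB on a compact-task queue but 2 on a normal-task queue.
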
* [V3 7/7] net/hinic3: use different callback func to support htn fdir
  2026-03-18 2:19 ` [v3 " Feifei Wang
                     ` (5 preceding siblings ...)
  2026-03-18 2:19 ` [V3 6/7] net/hinic3: add tx " Feifei Wang
@ 2026-03-18 2:19 ` Feifei Wang
  6 siblings, 0 replies; 80+ messages in thread
From: Feifei Wang @ 2026-03-18 2:19 UTC (permalink / raw)
  To: dev; +Cc: Feifei Wang
From: Feifei Wang <wangfeifei40@huawei.com>
For the new SPx NIC, the way flow rules are created differs from the
previous SPx NIC, so use different callback functions to split them.
Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
 drivers/net/hinic3/base/hinic3_nic_cfg.c      |  55 +-
 drivers/net/hinic3/base/hinic3_nic_cfg.h      |  19 +-
 drivers/net/hinic3/hinic3_ethdev.c            |  41 +-
 drivers/net/hinic3/hinic3_fdir.c              | 657 +++++++++++++-----
 drivers/net/hinic3/hinic3_fdir.h              | 361 ++++++++--
 drivers/net/hinic3/hinic3_nic_io.h            |  16 -
 drivers/net/hinic3/hinic3_rx.c                |  26 +-
 drivers/net/hinic3/hinic3_tx.c                |  16 +-
 .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h    |   8 +
 drivers/net/hinic3/meson.build                |   8 +-
 .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c    |   2 +-
 .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h    |   8 +
 12 files changed, 877 insertions(+), 340 deletions(-)

diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c
index 22caac0457..5387626b98 100644
--- a/drivers/net/hinic3/base/hinic3_nic_cfg.c
+++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c
@@ -970,7 +970,7 @@ hinic3_set_vlan_filter(struct hinic3_hwdev *hwdev, uint32_t vlan_filter_ctrl)
 
 static int
 hinic3_set_rx_lro(struct hinic3_hwdev *hwdev, uint8_t ipv4_en,
-		  uint8_t ipv6_en, uint8_t lro_max_pkt_len)
+	       uint8_t ipv6_en, uint8_t lro_max_pkt_len)
 {
 	struct hinic3_cmd_lro_config lro_cfg = {0};
 	uint16_t out_size = sizeof(lro_cfg);
@@ -1029,7 +1029,7 @@ hinic3_set_rx_lro_timer(struct hinic3_hwdev *hwdev, uint32_t timer_value)
 }
 
 int
-hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer,
+hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer,
 			uint32_t lro_max_pkt_len)
 {
 	uint8_t ipv4_en = 0, ipv6_en = 0;
@@ -1468,54 +1468,6 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id)
 	return 0;
 }
 
-/**
- * Set the Ethernet type filtering rule for the FDIR of a NIC.
- *
- * @param[in] hwdev
- * Pointer to hardware device structure.
- * @param[in] pkt_type
- * Indicate the packet type.
- * @param[in] queue_id
- * Indicate the queue id.
- * @param[in] en
- * Indicate whether to add or delete an operation. 1 - add; 0 - delete.
- *
- * @return
- * 0 on success, non-zero on failure.
- */ -int -hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en) -{ - struct hinic3_set_fdir_ethertype_rule ethertype_cmd; - uint16_t out_size = sizeof(ethertype_cmd); - int err; - - if (!hwdev) - return -EINVAL; - - memset(ðertype_cmd, 0, - sizeof(struct hinic3_set_fdir_ethertype_rule)); - ethertype_cmd.func_id = hinic3_global_func_id(hwdev); - ethertype_cmd.pkt_type = pkt_type; - ethertype_cmd.pkt_type_en = en; - ethertype_cmd.qid = (uint8_t)queue_id; - - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, - HINIC3_NIC_CMD_SET_FDIR_STATUS, - ðertype_cmd, sizeof(ethertype_cmd), - ðertype_cmd, &out_size); - if (err || ethertype_cmd.head.status || !out_size) { - PMD_DRV_LOG(ERR, - "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", - err, ethertype_cmd.head.status, out_size, - ethertype_cmd.func_id); - return -EIO; - } - - return 0; -} - int hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule, uint8_t tcam_rule_type) @@ -1543,8 +1495,7 @@ hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tc &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd, &out_size); if (err || tcam_cmd.msg_head.status || !out_size) { - PMD_DRV_LOG(ERR, - "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", + PMD_DRV_LOG(ERR, "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", err, tcam_cmd.msg_head.status, out_size); return -EIO; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index 06d5bc7d1b..6d3eb433bd 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -1203,7 +1203,7 @@ int hinic3_set_rx_vlan_offload(struct hinic3_hwdev *hwdev, uint8_t en); * @return * 0 on success, non-zero on failure. */ -int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len); /** @@ -1522,8 +1522,21 @@ int hinic3_get_feature_from_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, */ int hinic3_set_feature_to_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, uint16_t size); -int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en); +/** + * Set the Ethernet type filtering rule for the FDIR of a NIC. + * + * @param[in] hwdev + * Pointer to hardware device structure. + * @param[in] pkt_type + * Indicate the packet type. + * @param[in] ethertype_filter + * Pointer to ethertype_filter structure. + * @param[in] en + * Indicate whether to add or delete an operation. 1 - add; 0 - delete. + * + * @return + * 0 on success, non-zero on failure. 
+ */ int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status); diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 1010773ac1..35f9e2ef8c 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -975,8 +975,8 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, "RX queue depth is out of range from %d to %d", HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH); PMD_DRV_LOG(ERR, - "nb_desc: %d, q_depth: %d, port: %d queue: %d", - nb_desc, rq_depth, dev->data->port_id, qid); + "nb_desc: %d, q_depth: %d, port: %d queue: %d", + nb_desc, rq_depth, dev->data->port_id, qid); return -EINVAL; } @@ -2158,8 +2158,7 @@ hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) } /* Update max frame size. */ - HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = - HINIC3_MTU_TO_PKTLEN(mtu); + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = HINIC3_MTU_TO_PKTLEN(mtu); nic_dev->mtu_size = mtu; return err; } @@ -2357,6 +2356,12 @@ hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev) uint32_t rx_mode; int err; + if (!(nic_dev->feature_cap & NIC_F_PROMISC)) { + PMD_DRV_LOG(ERR, "nic_dev: %s, port_id: %d, do not support vf promisc: %" PRIu64 "", + nic_dev->dev_name, dev->data->port_id, nic_dev->feature_cap); + return -ENOTSUP; + } + rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC; err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode); @@ -2527,20 +2532,22 @@ hinic3_rss_hash_update(struct rte_eth_dev *dev, } rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | - RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) - ? 1 - : 0; + RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | - RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) - ? 1 - : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_type.tcp_ipv6_ext = rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.tcp_ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); if (err) PMD_DRV_LOG(ERR, "Set RSS type failed"); @@ -2597,12 +2604,16 @@ hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) rss_conf->rss_hf |= rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) : 0; - rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0; - rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0; rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0; - + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? 
RTE_ETH_RSS_IPV6_TCP_EX : 0; + } else { + rss_conf->rss_hf |= 0; + rss_conf->rss_hf |= 0; + } return 0; } diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c index 263a281729..37a4f0cf52 100644 --- a/drivers/net/hinic3/hinic3_fdir.c +++ b/drivers/net/hinic3/hinic3_fdir.c @@ -2,15 +2,15 @@ * Copyright(c) 2025 Huawei Technologies Co., Ltd */ +#include "base/hinic3_cmd.h" #include "base/hinic3_compat.h" #include "base/hinic3_hwdev.h" #include "base/hinic3_hwif.h" #include "base/hinic3_nic_cfg.h" #include "hinic3_ethdev.h" +#include "hinic3_nic_io.h" -#define HINIC3_UINT1_MAX 0x1 -#define HINIC3_UINT4_MAX 0xf -#define HINIC3_UINT15_MAX 0x7fff +#define HINIC3_INVALID_INDEX -1 #define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \ (&((struct hinic3_nic_dev *)(nic_dev))->tcam) @@ -77,6 +77,8 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src IPv4. */ tcam_key->key_mask.sipv4_h = @@ -99,15 +101,9 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip); } -static void -hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) +static void hinic3_fdir_ipv6_tcam_key_init_sip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) { - /* Fill type of ip. */ - tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; - tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; - - /* Fill src IPv6. */ tcam_key->key_mask_ipv6.sipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); tcam_key->key_mask_ipv6.sipv6_key1 = @@ -140,8 +136,11 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); tcam_key->key_info_ipv6.sipv6_key7 = HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); +} - /* Fill dst IPv6. */ +static void hinic3_fdir_ipv6_tcam_key_init_dip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ tcam_key->key_mask_ipv6.dipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); tcam_key->key_mask_ipv6.dipv6_key1 = @@ -176,6 +175,26 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); } +static void hinic3_fdir_ipv6_tcam_key_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + hinic3_fdir_ipv6_tcam_key_init_sip(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init_dip(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + /* Fill type of ip. */ + tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); +} + /** * Set the TCAM information in notunnel scenario. 
* @@ -204,6 +223,10 @@ hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev, tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + + tcam_key->key_mask.vlan_flag = 1; + tcam_key->key_info.vlan_flag = 0; + tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -223,6 +246,8 @@ hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src ipv4. */ tcam_key->key_mask.sipv4_h = @@ -252,6 +277,8 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_vxlan_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_vxlan_ipv6.vlan_flag = 0; /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. */ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 = @@ -288,77 +315,6 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]); } -static void -hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) -{ - tcam_key->key_mask_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_mask_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - - tcam_key->key_mask_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key5 = - 
HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_mask_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); -} - static void hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, struct hinic3_fdir_filter *rule, @@ -370,11 +326,14 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto; tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info_ipv6.tunnel_type = rule->tunnel_type; tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; tcam_key->key_info_ipv6.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -386,7 +345,7 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.sport = rule->key_spec.src_port; if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY) - hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); } /** @@ -448,9 +407,11 @@ hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev, HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info.tunnel_type = rule->tunnel_type; + tcam_key->key_mask.vlan_flag = 1; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + tcam_key->key_info.vlan_flag = 0; tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -479,6 +440,259 @@ hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev, tcam_key_calculate(tcam_key, fdir_tcam_rule); } +static void +hinic3_fdir_tcam_key_set_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.dipv4_l = + 
HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_sip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_mask_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_dip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_mask_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + 
tcam_key->key_mask_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void hinic3_fdir_tcam_key_set_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_sip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_notunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.outer_ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6_htn.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.inner_ipv4, + &rule->key_spec.inner_ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_vxlan_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_vxlan_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.inner_ipv6, + &rule->key_spec.inner_ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_tunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = rule->tunnel_type; + + tcam_key->key_mask_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_mask_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_h = + 
HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); + + if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_outer_ipv4_info(rule, tcam_key); + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_inner_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_inner_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_info_htn_init(struct rte_eth_dev *dev, + struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key, + struct hinic3_tcam_cfg_rule *fdir_tcam_rule) +{ + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + + tcam_key->key_mask_htn.function_id_h = HINIC3_UINT5_MAX; + tcam_key->key_mask_htn.function_id_l = HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_l = + hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_h = + (hinic3_global_func_id(nic_dev->hwdev) >> HINIC3_UINT5_WIDTH) & HINIC3_UINT5_MAX; + + tcam_key->key_mask_htn.ip_proto = rule->key_mask.proto; + tcam_key->key_info_htn.ip_proto = rule->key_spec.proto; + + tcam_key->key_mask_htn.sport = rule->key_mask.src_port; + tcam_key->key_info_htn.sport = rule->key_spec.src_port; + + tcam_key->key_mask_htn.dport = rule->key_mask.dst_port; + tcam_key->key_info_htn.dport = rule->key_spec.dst_port; + if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL) + hinic3_fdir_tcam_notunnel_htn_init(rule, tcam_key); + else + hinic3_fdir_tcam_tunnel_htn_init(rule, tcam_key); + + fdir_tcam_rule->data.qid = rule->rq_index; + + tcam_key_calculate(tcam_key, fdir_tcam_rule); +} + /** * Find filter in given ethertype filter list. * @@ -513,19 +727,30 @@ hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_li * Point to the tcam filter list. * @param[in] key * The tcam key to find. + * @param[in] action_type + * The type of action. + * @param[in] tcam_index + * The index of tcam. * @return * If a matching filter is found, the filter is returned, otherwise NULL. */ static inline struct hinic3_tcam_filter * hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list, - struct hinic3_tcam_key *key) + struct hinic3_tcam_key *key, + uint8_t action_type, uint16_t tcam_index) { struct hinic3_tcam_filter *it; - TAILQ_FOREACH(it, filter_list, entries) { - if (memcmp(key, &it->tcam_key, - sizeof(struct hinic3_tcam_key)) == 0) { - return it; + if (action_type == HINIC3_ACTION_ADD) { + TAILQ_FOREACH(it, filter_list, entries) { + if (memcmp(key, &it->tcam_key, sizeof(struct hinic3_tcam_key)) == 0) + return it; + } + } else { + TAILQ_FOREACH(it, filter_list, entries) { + if (tcam_index == + (it->index + HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(it->dynamic_block_id))) + return it; } } @@ -588,25 +813,18 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, * * @param[in] dev * Pointer to ethernet device structure. - * @param[in] fdir_tcam_rule - * Indicate the filtering rule to be searched for. * @param[in] tcam_info * Ternary Content-Addressable Memory (TCAM) information. - * @param[in] tcam_filter - * Point to the TCAM filter. * @param[out] tcam_index * Indicate the TCAM index to be searched for. * @result * Pointer to the TCAM dynamic block. If the search fails, NULL is returned. 
*/ static struct hinic3_tcam_dynamic_block * -hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, - struct hinic3_tcam_cfg_rule *fdir_tcam_rule, +hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev, struct hinic3_tcam_info *tcam_info, - struct hinic3_tcam_filter *tcam_filter, uint16_t *tcam_index) { - struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); uint16_t block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt; struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; struct hinic3_tcam_dynamic_block *tmp = NULL; @@ -616,6 +834,8 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, uint16_t index; int err; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + rule_nums += nic_dev->ethertype_rule_nums; /* * Check whether the number of filtering rules reaches the maximum * capacity of dynamic TCAM blocks. @@ -662,8 +882,7 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, if (tmp == NULL || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "Fdir filter dynamic lookup for index failed!"); + PMD_DRV_LOG(ERR, "Fdir filter dynamic lookup for index failed!"); goto look_up_failed; } @@ -674,20 +893,13 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, /* Find the first free position. */ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "tcam block 0x%x supports filter rules is full!", + PMD_DRV_LOG(ERR, "tcam block 0x%x supports filter rules is full!", tmp->dynamic_block_id); goto look_up_failed; } - tcam_filter->dynamic_block_id = tmp->dynamic_block_id; - tcam_filter->index = index; *tcam_index = index; - fdir_tcam_rule->index = - HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) + - index; - return tmp; look_up_failed: @@ -702,6 +914,107 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, return NULL; } +static void +hinic3_tcam_index_free(struct hinic3_nic_dev *nic_dev, uint16_t index, uint16_t block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + + TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list, entries) { + if (tmp->dynamic_block_id == block_id) + break; + } + + if (tmp == NULL || tmp->dynamic_block_id != block_id) { + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); + return; + } + + tmp->dynamic_index[index] = 0; + tmp->dynamic_index_cnt--; + if (tmp->dynamic_index_cnt == 0) { + hinic3_free_tcam_block(nic_dev->hwdev, &block_id); + hinic3_free_dynamic_block_resource(tcam_info, tmp); + } +} + +static uint16_t +hinic3_tcam_alloc_index(void *dev, uint16_t *block_id) +{ + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev; + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + uint16_t index = 0; + + tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev, tcam_info, &index); + if (tmp == NULL) { + PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); + return HINIC3_TCAM_INVALID_INDEX; + } + + tmp->dynamic_index[index] = 1; + tmp->dynamic_index_cnt++; + + *block_id = tmp->dynamic_block_id; + + return index; +} + +static int +hinic3_set_fdir_ethertype_filter(void *hwdev, uint8_t pkt_type, void *filter, uint8_t en) +{ + struct hinic3_set_fdir_ethertype_rule ethertype_cmd; + struct hinic3_ethertype_filter *ethertype_filter = (struct hinic3_ethertype_filter *)filter; + uint16_t out_size = sizeof(ethertype_cmd); + uint16_t 
block_id; + uint32_t index = 0; + int err; + + if (!hwdev) + return -EINVAL; + struct hinic3_nic_dev *nic_dev = + (struct hinic3_nic_dev *)((struct hinic3_hwdev *)hwdev)->dev_handle; + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en != 0) { + index = hinic3_tcam_alloc_index(nic_dev, &block_id); + if (index == HINIC3_TCAM_INVALID_INDEX) + return -ENOMEM; + index += HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id); + } else { + index = ethertype_filter->tcam_index[pkt_type]; + } + } + + memset(ðertype_cmd, 0, sizeof(struct hinic3_set_fdir_ethertype_rule)); + ethertype_cmd.func_id = hinic3_global_func_id(hwdev); + ethertype_cmd.pkt_type = pkt_type; + ethertype_cmd.pkt_type_en = en; + ethertype_cmd.index = index; + ethertype_cmd.qid = (uint8_t)ethertype_filter->queue; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, + HINIC3_NIC_CMD_SET_FDIR_STATUS, + ðertype_cmd, sizeof(ethertype_cmd), + ðertype_cmd, &out_size); + if (err || ethertype_cmd.head.status || !out_size) { + PMD_DRV_LOG(ERR, + "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", + err, ethertype_cmd.head.status, out_size, ethertype_cmd.func_id); + return -EIO; + } + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en == 0) { + hinic3_tcam_index_free(nic_dev, HINIC3_TCAM_GET_INDEX_IN_BLOCK(index), + HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index)); + } else { + ethertype_filter->tcam_index[pkt_type] = index; + } + } + + return 0; +} + /** * Add a TCAM filter. * @@ -722,11 +1035,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); - struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; - struct hinic3_tcam_dynamic_block *tmp = NULL; struct hinic3_tcam_filter *tcam_filter; - uint16_t tcam_block_index = 0; - uint16_t index = 0; int err; /* Alloc TCAM filter memory. */ @@ -737,39 +1046,14 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, tcam_filter->tcam_key = *tcam_key; tcam_filter->queue = (uint16_t)(fdir_tcam_rule->data.qid); - - /* Add new TCAM rules. */ - if (nic_dev->tcam_rule_nums == 0) { - err = hinic3_alloc_tcam_block(nic_dev->hwdev, &tcam_block_index); - if (err) { - PMD_DRV_LOG(ERR, - "Fdir filter tcam alloc block failed!"); - goto failed; - } - - dynamic_block_ptr = - hinic3_alloc_dynamic_block_resource(tcam_info, - tcam_block_index); - if (dynamic_block_ptr == NULL) { - PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first block memory failed!"); - goto alloc_block_failed; - } - } - - /* - * Look for an available index in the dynamic block to store the new - * TCAM filter. - */ - tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info, - tcam_filter, &index); - if (tmp == NULL) { - PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); - goto lookup_tcam_index_failed; - } + tcam_filter->index = hinic3_tcam_alloc_index(nic_dev, &tcam_filter->dynamic_block_id); + if (tcam_filter->index == HINIC3_TCAM_INVALID_INDEX) + goto failed; + fdir_tcam_rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tcam_filter->dynamic_block_id) + + tcam_filter->index; /* Add a new TCAM rule to the network device. 
*/ - err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, - TCAM_RULE_FDIR_TYPE); + err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, TCAM_RULE_FDIR_TYPE); if (err) { PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!"); goto add_tcam_rules_failed; @@ -785,10 +1069,6 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, /* Add a filter to the end of the queue. */ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries); - /* Update dynamic index. */ - tmp->dynamic_index[index] = 1; - tmp->dynamic_index_cnt++; - nic_dev->tcam_rule_nums++; PMD_DRV_LOG(INFO, @@ -796,7 +1076,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, hinic3_global_func_id(nic_dev->hwdev)); PMD_DRV_LOG(INFO, "tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d", - tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index, + tcam_filter->dynamic_block_id, tcam_filter->index, fdir_tcam_rule->index, fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums); return 0; @@ -806,14 +1086,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, TCAM_RULE_FDIR_TYPE); add_tcam_rules_failed: -lookup_tcam_index_failed: - if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL) - hinic3_free_dynamic_block_resource(tcam_info, - dynamic_block_ptr); - -alloc_block_failed: - if (nic_dev->tcam_rule_nums == 0) - hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tcam_filter->dynamic_block_id); failed: rte_free(tcam_filter); @@ -850,8 +1123,7 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, } if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) { - PMD_DRV_LOG(ERR, - "Fdir filter del dynamic lookup for block failed!"); + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); return -EINVAL; } /* Calculate TCAM index. */ @@ -873,14 +1145,9 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, dynamic_block_id, tcam_filter->index, index, tmp->dynamic_index_cnt - 1, nic_dev->tcam_rule_nums - 1); - tmp->dynamic_index[tcam_filter->index] = 0; - tmp->dynamic_index_cnt--; - nic_dev->tcam_rule_nums--; - if (tmp->dynamic_index_cnt == 0) { - hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tmp->dynamic_block_id); - hinic3_free_dynamic_block_resource(tcam_info, tmp); - } + nic_dev->tcam_rule_nums--; /* If the number of rules is 0, the TCAM filter is disabled. */ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) @@ -930,6 +1197,7 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_tcam_filter *tcam_filter; @@ -940,11 +1208,15 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule)); memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key)); - hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, - &fdir_tcam_rule); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + else + hinic3_fdir_tcam_info_htn_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + /* Search for a filter. 
*/ tcam_filter = - hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key); + hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_ADD, HINIC3_INVALID_INDEX); if (tcam_filter != NULL && add) { PMD_DRV_LOG(ERR, "Filter exists."); return -EEXIST; @@ -965,6 +1237,13 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, fdir_filter->tcam_index = (int)(fdir_tcam_rule.index); } else { + tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_NOT_ADD, + fdir_filter->tcam_index); + if (tcam_filter == NULL) { + PMD_DRV_LOG(ERR, "Filter doesn't exist."); + return -ENOENT; + } PMD_DRV_LOG(INFO, "begin to del tcam filter"); ret = hinic3_del_tcam_filter(dev, tcam_filter); if (ret) @@ -1088,7 +1367,7 @@ hinic3_free_fdir_filter(struct rte_eth_dev *dev) static int hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1097,7 +1376,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1107,7 +1386,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Request Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1117,7 +1396,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Response Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1129,19 +1408,19 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, set_arp_rep_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, !add); + ethertype_filter, !add); set_arp_req_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1150,7 +1429,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the LACP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1160,7 +1439,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the OAM Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_OAM, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d", add ? 
"Add" : "Del", ret); @@ -1172,14 +1451,14 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, set_arp_oam_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1188,7 +1467,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the LLDP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1198,7 +1477,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the CDCP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_CDCP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1210,14 +1489,14 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, set_arp_cdcp_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1245,7 +1524,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, return hinic3_flow_set_arp_filter(dev, ethertype_filter, add); case RTE_ETHER_TYPE_RARP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_RARP, ethertype_filter, add); case RTE_ETHER_TYPE_SLOW: return hinic3_flow_set_slow_filter(dev, ethertype_filter, add); @@ -1255,11 +1534,11 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, case RTE_ETHER_TYPE_CNM: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_CNM, ethertype_filter, add); case RTE_ETHER_TYPE_ECP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_ECP, ethertype_filter, add); default: PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d", @@ -1270,7 +1549,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, } static int -hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter) +hinic3_flow_ethertype_rule_nums(struct hinic3_ethertype_filter *ethertype_filter) { switch (ethertype_filter->ether_type) { case RTE_ETHER_TYPE_ARP: @@ -1309,7 +1588,7 @@ hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filte */ int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { /* Get dev private info. 
*/ diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h index 8659f588d9..277d89d4fd 100644 --- a/drivers/net/hinic3/hinic3_fdir.h +++ b/drivers/net/hinic3/hinic3_fdir.h @@ -14,6 +14,30 @@ #define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index)) +#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \ + ((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \ + ((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_INVALID_INDEX 0xFFFF + +enum hinic3_ether_type { + HINIC3_PKT_TYPE_ARP = 1, + HINIC3_PKT_TYPE_ARP_REQ, + HINIC3_PKT_TYPE_ARP_REP, + HINIC3_PKT_TYPE_RARP, + HINIC3_PKT_TYPE_LACP, + HINIC3_PKT_TYPE_LLDP, + HINIC3_PKT_TYPE_OAM, + HINIC3_PKT_TYPE_CDCP, + HINIC3_PKT_TYPE_CNM, + HINIC3_PKT_TYPE_ECP = 10, + HINIC3_PKT_TYPE_BUTT, + + HINIC3_PKT_UNKNOWN = 31, +}; + /* Indicate a traffic filtering rule. */ struct rte_flow { TAILQ_ENTRY(rte_flow) node; @@ -30,6 +54,8 @@ struct hinic3_fdir_rule_key { uint16_t src_port; uint16_t dst_port; uint8_t proto; + uint8_t vlan_flag; + uint16_t ether_type; }; struct hinic3_fdir_filter { @@ -42,17 +68,34 @@ struct hinic3_fdir_filter { uint32_t rq_index; /**< Queue assigned when matched. */ }; +struct hinic3_ethertype_filter { + int tcam_index[HINIC3_PKT_TYPE_BUTT]; + uint16_t ether_type; /**< Ether type to match */ + uint16_t queue; /**< Queue assigned to when match*/ +}; + /* This structure is used to describe a basic filter type. */ struct hinic3_filter_t { uint16_t filter_rule_nums; enum rte_filter_type filter_type; - struct rte_eth_ethertype_filter ethertype_filter; + struct hinic3_ethertype_filter ethertype_filter; struct hinic3_fdir_filter fdir_filter; }; +enum hinic3_action_type { + HINIC3_ACTION_ADD, + HINIC3_ACTION_NOT_ADD, +}; + enum hinic3_fdir_tunnel_mode { HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0, - HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_NVGRE = 2, + HINIC3_FDIR_TUNNEL_MODE_FC = 3, + HINIC3_FDIR_TUNNEL_MODE_GPE = 4, + HINIC3_FDIR_TUNNEL_MODE_GENEVE = 5, + HINIC3_FDIR_TUNNEL_MODE_NSH = 6, + HINIC3_FDIR_TUNNEL_MODE_IPIP = 7, }; enum hinic3_fdir_ip_type { @@ -61,7 +104,6 @@ enum hinic3_fdir_ip_type { HINIC3_FDIR_IP_TYPE_ANY = 2, }; -/* Describe the key structure of the TCAM. */ struct hinic3_tcam_key_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -77,11 +119,13 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t dipv4_l : 16; - uint32_t rsvd2 : 16; + + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t dport : 16; uint32_t sport : 16; @@ -89,9 +133,10 @@ struct hinic3_tcam_key_mem { uint32_t rsvd6 : 16; uint32_t outer_sipv4_h : 16; - uint32_t outer_sipv4_l : 16; + uint32_t outer_sipv4_l : 16; uint32_t outer_dipv4_h : 16; + uint32_t outer_dipv4_l : 16; uint32_t vni_h : 16; @@ -110,13 +155,14 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t sipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t dipv4_l : 16; uint32_t rsvd3; uint32_t dport : 16; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t rsvd5 : 16; uint32_t sport : 16; @@ -135,18 +181,90 @@ struct hinic3_tcam_key_mem { #endif }; -/* - * Define the IPv6-related TCAM key data structure in common - * scenarios or IPv6 tunnel scenarios. 
- */ +struct hinic3_tcam_key_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h: 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t sipv4_h : 16; + + uint32_t sipv4_l : 16; + uint32_t rsvd5 : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t rsvd8 : 16; + uint32_t dipv4_h : 16; + + uint32_t dipv4_l : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd5 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t sipv4_h : 16; + uint32_t vni_l : 16; + + uint32_t rsvd5 : 16; + uint32_t sipv4_l : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t dipv4_h : 16; + uint32_t rsvd8 : 16; + + uint32_t sport : 16; + uint32_t dipv4_l :16; + + uint32_t rsvd9 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; - /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */ uint32_t ip_proto : 8; uint32_t tunnel_type : 4; uint32_t outer_ip_type : 1; - uint32_t rsvd1 : 3; + uint32_t vlan_flag : 1; + uint32_t rsvd1 : 2; uint32_t function_id : 15; uint32_t ip_type : 1; @@ -179,7 +297,9 @@ struct hinic3_tcam_key_ipv6_mem { uint32_t dipv6_key7 : 16; uint32_t rsvd2 : 16; #else - uint32_t rsvd1 : 3; + uint32_t rsvd1 : 2; + uint32_t vlan_flag : 1; + uint32_t outer_ip_type : 1; uint32_t tunnel_type : 4; uint32_t ip_proto : 8; @@ -218,10 +338,86 @@ struct hinic3_tcam_key_ipv6_mem { #endif }; -/* - * Define the tcam key value data structure related to IPv6 in - * the VXLAN scenario. 
- */ +struct hinic3_tcam_key_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t sipv6_key0 : 16; + + uint32_t sipv6_key1 : 16; + uint32_t sipv6_key2 : 16; + + uint32_t sipv6_key3 : 16; + uint32_t sipv6_key4 : 16; + + uint32_t sipv6_key5 : 16; + uint32_t sipv6_key6 : 16; + + uint32_t sipv6_key7 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t sipv6_key0 : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t sipv6_key2 : 16; + uint32_t sipv6_key1 : 16; + + uint32_t sipv6_key4 : 16; + uint32_t sipv6_key3 : 16; + + uint32_t sipv6_key6 : 16; + uint32_t sipv6_key5 : 16; + + uint32_t dipv6_key0 : 16; + uint32_t sipv6_key7 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd2 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_vxlan_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -246,7 +442,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t sport : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3 : 16; uint32_t outer_sipv4_h : 16; @@ -281,7 +478,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t dipv6_key7 : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t sport : 16; uint32_t outer_sipv4_h : 16; @@ -298,6 +496,88 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { #endif }; +struct hinic3_tcam_key_vxlan_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t rsvd3 : 16; + + uint32_t rsvd4 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t rsvd3 : 16; + uint32_t vni_l : 16; + + uint32_t dipv6_key0 : 16; + 
uint32_t rsvd4 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd5 : 16; + uint32_t dport : 16; +#endif +}; + /* * TCAM key structure. The two unions indicate the key and mask respectively. * The TCAM key is consistent with the TCAM entry. @@ -307,18 +587,26 @@ struct hinic3_tcam_key { struct hinic3_tcam_key_mem key_info; struct hinic3_tcam_key_ipv6_mem key_info_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_info_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_info_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_info_vxlan_ipv6_htn; }; union { struct hinic3_tcam_key_mem key_mask; struct hinic3_tcam_key_ipv6_mem key_mask_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_mask_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_mask_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_mask_vxlan_ipv6_htn; }; }; /* Structure indicates the TCAM filter. */ struct hinic3_tcam_filter { - TAILQ_ENTRY(hinic3_tcam_filter) - entries; /**< Filter entry, used for linked list operations. */ + /** Filter entry, used for linked list operations. */ + TAILQ_ENTRY(hinic3_tcam_filter) entries; uint16_t dynamic_block_id; /**< Dynamic block ID. */ uint16_t index; /**< TCAM index. */ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */ @@ -362,33 +650,24 @@ struct hinic3_tcam_info { #define HINIC3_CNM_RULE_NUM 1 #define HINIC3_ECP_RULE_NUM 2 +#define HINIC3_UINT1_MAX 0x1 +#define HINIC3_UINT2_MAX 0x3 +#define HINIC3_UINT3_MAX 0x7 +#define HINIC3_UINT4_MAX 0xf +#define HINIC3_UINT5_WIDTH 0x5 +#define HINIC3_UINT5_MAX 0x1f +#define HINIC3_UINT15_MAX 0x7fff + /* Define Ethernet type. */ #define RTE_ETHER_TYPE_CNM 0x22e7 #define RTE_ETHER_TYPE_ECP 0x8940 -/* Protocol type of the data packet. */ -enum hinic3_ether_type { - HINIC3_PKT_TYPE_ARP = 1, - HINIC3_PKT_TYPE_ARP_REQ, - HINIC3_PKT_TYPE_ARP_REP, - HINIC3_PKT_TYPE_RARP, - HINIC3_PKT_TYPE_LACP, - HINIC3_PKT_TYPE_LLDP, - HINIC3_PKT_TYPE_OAM, - HINIC3_PKT_TYPE_CDCP, - HINIC3_PKT_TYPE_CNM, - HINIC3_PKT_TYPE_ECP = 10, - - HINIC3_PKT_UNKNOWN = 31, -}; - int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add); int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add); - void hinic3_free_fdir_filter(struct rte_eth_dev *dev); int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, uint32_t queue_id, uint32_t able); diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index d0acba4cf4..e1741d1156 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -277,22 +277,6 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); -/** - * Get cmdq ops software tile NIC(stn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); - -/** - * Get cmdq ops hardware tile NIC(htn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); - /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 363f3f56c8..1a9a88204f 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -22,8 +22,7 @@ * Current pi. */ static inline void -hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, - uint16_t *pi) +hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, uint16_t *pi) { *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx); @@ -84,8 +83,7 @@ hinic3_rx_fill_wqe(struct hinic3_rxq *rxq) if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { /* Unit of cqe length is 16B. */ - hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, - cqe_dma, + hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, cqe_dma, HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT); /* Use fixed len. */ rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; @@ -436,12 +434,18 @@ hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev, rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); return err; } @@ -488,8 +492,7 @@ hinic3_update_rss_config(struct rte_eth_dev *dev, goto init_rss_fail; } - err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, - prio_tc); + err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, prio_tc); if (err) { PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err); goto init_rss_fail; @@ -796,7 +799,7 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) } } - hinic3_rearm_rxq_mbuf(rxq); + (void)hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); if (err) @@ -812,7 +815,6 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return err; } - static inline uint64_t hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { @@ -1016,8 +1018,8 @@ hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hini uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); - cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); - cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index fca94dd08e..1a864d0775 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -393,7 +393,7 @@ static int hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_wqe_info *wqe_info) + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = 
mbuf->ol_flags; struct hinic3_offload_info *offload_info = &wqe_info->offload_info; @@ -409,7 +409,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + wqe_info->queue_info.payload_offset = wqe_info->payload_offset >> 1; if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; @@ -457,7 +457,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, offload_info->out_l4_en = 1; set_tx_wqe_offload: - nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); + nic_dev->tx_ops->nic_tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -627,9 +627,8 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, return err; /* Non-tso mbuf only check sge num. */ - if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { + if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); - } /* Tso mbuf. */ wqe_info->payload_offset = @@ -647,8 +646,7 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, } static inline void -hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, - uint32_t len) +hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, uint32_t len) { buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); @@ -832,14 +830,14 @@ hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT); *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | - SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset, PLDOFF) | SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index ffafe39fb5..73f4922734 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -52,4 +52,12 @@ struct hinic3_htn_vlan_ctx { uint16_t dest_func_id; }; +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @return + * Pointer to ops. 
+ */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); + #endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build index b79b753716..b286cdb79c 100644 --- a/drivers/net/hinic3/meson.build +++ b/drivers/net/hinic3/meson.build @@ -16,8 +16,6 @@ endif cflags += ['-DHW_CONVERT_ENDIAN'] -subdir('base') - sources = files( 'hinic3_ethdev.c', 'hinic3_nic_io.c', @@ -28,3 +26,9 @@ sources = files( ) includes += include_directories('base') +includes += include_directories('stn_adapt') +includes += include_directories('stn_adapt') + +subdir('base') +subdir('htn_adapt') +subdir('stn_adapt') \ No newline at end of file diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index dfe8598f78..f41f060d17 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -94,7 +94,7 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } -static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev __rte_unused, const uint32_t *indir_table, struct hinic3_cmd_buf *cmd_buf) { diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index a40c4faa89..f1720c29c7 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -35,4 +35,12 @@ struct hinic3_stn_vlan_ctx { uint32_t vlan_sel; }; +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); + #endif /* _HINIC3_STN_CMDQ_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
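The stn_adapt/htn_adapt headers above only declare the two cmdq ops getters; the code that actually picks between them lands in patch 3/7 ("use different callback func to split new/old cmdq operations"), which is not shown in this message. As a rough sketch of the idea, assuming a hypothetical helper name and an ops pointer on the nic_dev (only the getter names and the NIC_F_HTN_CMDQ feature bit come from the series itself):

	/* Hypothetical sketch, not taken from the patches. */
	static void hinic3_nic_cmdq_ops_init(struct hinic3_nic_dev *nic_dev)
	{
		/* Hardware tile NICs (htn) advertise NIC_F_HTN_CMDQ; everything
		 * else keeps the software tile NIC (stn) command-queue ops.
		 */
		if (nic_dev->feature_cap & NIC_F_HTN_CMDQ)
			nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops();
		else
			nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops();
	}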
* [v4 0/7] hinic3 change for support new SPx NIC 2026-01-31 10:05 ` [PATCH 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang 2026-03-16 13:43 ` [V2 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-03-18 2:19 ` [v3 " Feifei Wang @ 2026-03-18 6:20 ` Feifei Wang 2026-03-18 6:20 ` [V4 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang ` (6 more replies) 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang 3 siblings, 7 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 6:20 UTC (permalink / raw) To: dev; +Cc: chenyi221 From: chenyi221 <chenyi221@huawei.com> Change hinic3 driver to support Huawei new SPx series NIC. v2: --fix build issues v3: --fix community review comments and err reports v4: --fix rss type assignment error Feifei Wang (7): net/hinic3: add support for new SPx series NIC net/hinic3: add enhance cmdq support for new SPx series NIC net/hinic3: use different callback func to split new/old cmdq operations net/hinic3: add fun init ops to support Compact CQE net/hinic3: add rx ops to support Compact CQE net/hinic3: add tx ops to support Compact CQE net/hinic3: use different callback func to support htn fdir drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ drivers/net/hinic3/base/hinic3_csr.h | 18 +- drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_hwif.c | 10 +- drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- drivers/net/hinic3/base/meson.build | 1 + drivers/net/hinic3/hinic3_ethdev.c | 279 ++++++-- drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- drivers/net/hinic3/hinic3_rx.c | 265 +++++-- drivers/net/hinic3/hinic3_rx.h | 182 ++++- drivers/net/hinic3/hinic3_tx.c | 458 ++++++------ drivers/net/hinic3/hinic3_tx.h | 154 +++- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ drivers/net/hinic3/htn_adapt/meson.build | 7 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 33 files changed, 3361 insertions(+), 1443 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build -- 2.45.1.windows.1 ^ permalink raw reply [flat|nested] 80+ messages in thread
* [V4 1/7] net/hinic3: add support for new SPx series NIC
  2026-03-18  6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang
@ 2026-03-18  6:20   ` Feifei Wang
  2026-03-18  6:20   ` [V4 2/7] net/hinic3: add enhance cmdq " Feifei Wang
  ` (5 subsequent siblings)
  6 siblings, 0 replies; 80+ messages in thread
From: Feifei Wang @ 2026-03-18  6:20 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang

From: Feifei Wang <wangfeifei40@huawei.com>

Add new device IDs to support Huawei new SPx series Network Adapters.

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
 drivers/net/hinic3/base/hinic3_csr.h  | 18 +++++++++---------
 drivers/net/hinic3/base/hinic3_hwif.c | 10 +++++++---
 drivers/net/hinic3/hinic3_ethdev.c    | 14 +++++++-------
 3 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h
index 94b10601c4..eceb34e9fd 100644
--- a/drivers/net/hinic3/base/hinic3_csr.h
+++ b/drivers/net/hinic3/base/hinic3_csr.h
@@ -5,15 +5,15 @@
 #ifndef _HINIC3_CSR_H_
 #define _HINIC3_CSR_H_
 
-#ifdef CONFIG_SP_VID_DID
-#define PCI_VENDOR_ID_SPNIC 0x1F3F
-#define HINIC3_DEV_ID_STANDARD 0x9020
-#define HINIC3_DEV_ID_VF 0x9001
-#else
-#define PCI_VENDOR_ID_HUAWEI 0x19e5
-#define HINIC3_DEV_ID_STANDARD 0x0222
-#define HINIC3_DEV_ID_VF 0x375F
-#endif
+#define PCI_VENDOR_ID_HUAWEI 0x19e5
+
+#define HINIC3_DEV_ID_SP620 0x0222
+#define HINIC3_DEV_ID_VF_SP620 0x375F
+
+#define HINIC3_DEV_ID_SP230 0x0229
+#define HINIC3_DEV_ID_VF_SP230 0x3750
+
+#define HINIC3_DEV_ID_SP920 0x0224
 
 /*
  * Bit30/bit31 for bar index flag.
diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c
index 080254bf44..c82b223fa0 100644
--- a/drivers/net/hinic3/base/hinic3_hwif.c
+++ b/drivers/net/hinic3/base/hinic3_hwif.c
@@ -138,7 +138,11 @@
 
 #define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MASK))
 
-#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF)
+static inline bool hinic3_is_vf_dev(const struct rte_pci_device *pdev)
+{
+	return pdev->id.device_id == HINIC3_DEV_ID_VF_SP620 ||
+	       pdev->id.device_id == HINIC3_DEV_ID_VF_SP230;
+}
 
 uint32_t
 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, uint32_t reg)
@@ -552,7 +556,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev)
 	void *db_base = NULL;
 	int cfg_bar;
 
-	cfg_bar = HINIC3_IS_VF_DEV(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR
+	cfg_bar = hinic3_is_vf_dev(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR
 					    : HINIC3_PF_PCI_CFG_REG_BAR;
 	cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr;
@@ -561,7 +565,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev)
 			    "mem_resource addr is null, cfg_regs_base is NULL");
 		return -EFAULT;
 	}
-	if (!HINIC3_IS_VF_DEV(pci_dev)) {
+	if (!hinic3_is_vf_dev(pci_dev)) {
 		mgmt_reg_base =
 			pci_dev->mem_resource[HINIC3_PCI_MGMT_REG_BAR].addr;
 		if (mgmt_reg_base == NULL) {
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 0f72728a95..da2d6722d2 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -3521,13 +3521,13 @@ hinic3_dev_uninit(struct rte_eth_dev *dev)
 }
 
 static const struct rte_pci_id pci_id_hinic3_map[] = {
-#ifdef CONFIG_SP_VID_DID
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)},
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)},
-#else
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)},
-	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)},
-#endif
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP620)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP620)},
+
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP230)},
+
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP920)},
 	{.vendor_id = 0},
 };
 
-- 
2.45.1.windows.1


^ permalink raw reply related	[flat|nested] 80+ messages in thread
* [V4 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC 2026-03-18 6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-03-18 6:20 ` [V4 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang @ 2026-03-18 6:20 ` Feifei Wang 2026-03-18 6:20 ` [V4 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang ` (4 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 6:20 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add enhance command queue for new SPx series NIC New SPx series NIC uses enhance command queue to send messages to hardware NIC, which is different from previous SPx NIC's common command queue.HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 to adapt to the two types of NICs. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_cmd.h | 80 ++-- drivers/net/hinic3/base/hinic3_cmdq.c | 370 +++++++----------- drivers/net/hinic3/base/hinic3_cmdq.h | 112 +++++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 ++++++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++++ drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 77 ++-- drivers/net/hinic3/base/meson.build | 1 + 13 files changed, 627 insertions(+), 333 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h index 6042ca51bd..f2d5d47522 100644 --- a/drivers/net/hinic3/base/hinic3_cmd.h +++ b/drivers/net/hinic3/base/hinic3_cmd.h @@ -23,14 +23,21 @@ #define HINIC3_RSS_TYPE_GET(val, member) \ (((uint32_t)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1) +#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) + /* NIC CMDQ MODE. */ enum hinic3_ucode_cmd { - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT = 1, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE = 4, - HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE = 5, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE = 6, - HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, + HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, + HINIC3_UCODE_CMD_ARM_SQ, + HINIC3_UCODE_CMD_ARM_RQ, + HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_SET_IQ_ENABLE, + HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_VLAN_CTX, }; /* Commands between NIC to MPU. 
*/ @@ -51,6 +58,12 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_RX_LRO = 13, HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE = 16, + + HINIC3_NIC_CMD_CACHE_OUT_QP_RES = 17, + HINIC3_NIC_CMD_SET_RQ_CI_CTX = 18, + HINIC3_NIC_CMD_SET_RQ_ENABLE = 19, + /* MAC & VLAN CFG */ HINIC3_NIC_CMD_GET_MAC = 20, HINIC3_NIC_CMD_SET_MAC = 21, @@ -59,6 +72,10 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + + HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN = 34, + HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN = 35, + /* RSS CFG */ HINIC3_NIC_CMD_RSS_CFG = 60, HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, @@ -89,6 +106,7 @@ enum hinic3_mgmt_cmd { HINIC3_MGMT_CMD_CFG_PAGESIZE = 22, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG = 23, HINIC3_MGMT_CMD_SET_DMA_ATTR = 25, + HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT = 26, HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40, HINIC3_MGMT_CMD_GET_FW_VERSION = 60, HINIC3_MGMT_CMD_GET_BOARD_INFO = 61, @@ -97,39 +115,39 @@ enum hinic3_mgmt_cmd { }; enum mag_cmd { - SERDES_CMD_PROCESS = 0, + SERDES_CMD_PROCESS = 0, - MAG_CMD_SET_PORT_CFG = 1, - MAG_CMD_SET_PORT_ADAPT = 2, - MAG_CMD_CFG_LOOPBACK_MODE = 3, + MAG_CMD_SET_PORT_CFG = 1, + MAG_CMD_SET_PORT_ADAPT = 2, + MAG_CMD_CFG_LOOPBACK_MODE = 3, - MAG_CMD_GET_PORT_ENABLE = 5, - MAG_CMD_SET_PORT_ENABLE = 6, - MAG_CMD_GET_LINK_STATUS = 7, - MAG_CMD_SET_LINK_FOLLOW = 8, - MAG_CMD_SET_PMA_ENABLE = 9, - MAG_CMD_CFG_FEC_MODE = 10, + MAG_CMD_GET_PORT_ENABLE = 5, + MAG_CMD_SET_PORT_ENABLE = 6, + MAG_CMD_GET_LINK_STATUS = 7, + MAG_CMD_SET_LINK_FOLLOW = 8, + MAG_CMD_SET_PMA_ENABLE = 9, + MAG_CMD_CFG_FEC_MODE = 10, /* PHY */ - MAG_CMD_GET_XSFP_INFO = 60, - MAG_CMD_SET_XSFP_ENABLE = 61, - MAG_CMD_GET_XSFP_PRESENT = 62, + MAG_CMD_GET_XSFP_INFO = 60, + MAG_CMD_SET_XSFP_ENABLE = 61, + MAG_CMD_GET_XSFP_PRESENT = 62, /* sfp/qsfp single byte read/write, for equipment test. 
*/ - MAG_CMD_SET_XSFP_RW = 63, - MAG_CMD_CFG_XSFP_TEMPERATURE = 64, + MAG_CMD_SET_XSFP_RW = 63, + MAG_CMD_CFG_XSFP_TEMPERATURE = 64, - MAG_CMD_WIRE_EVENT = 100, - MAG_CMD_LINK_ERR_EVENT = 101, + MAG_CMD_WIRE_EVENT = 100, + MAG_CMD_LINK_ERR_EVENT = 101, - MAG_CMD_EVENT_PORT_INFO = 150, - MAG_CMD_GET_PORT_STAT = 151, - MAG_CMD_CLR_PORT_STAT = 152, - MAG_CMD_GET_PORT_INFO = 153, - MAG_CMD_GET_PCS_ERR_CNT = 154, - MAG_CMD_GET_MAG_CNT = 155, - MAG_CMD_DUMP_ANTRAIN_INFO = 156, + MAG_CMD_EVENT_PORT_INFO = 150, + MAG_CMD_GET_PORT_STAT = 151, + MAG_CMD_CLR_PORT_STAT = 152, + MAG_CMD_GET_PORT_INFO = 153, + MAG_CMD_GET_PCS_ERR_CNT = 154, + MAG_CMD_GET_MAG_CNT = 155, + MAG_CMD_DUMP_ANTRAIN_INFO = 156, - MAG_CMD_MAX = 0xFF + MAG_CMD_MAX = 0xFF }; #endif /* _HINIC3_CMD_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c index e2b30ff94e..9c27c6f54c 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.c +++ b/drivers/net/hinic3/base/hinic3_cmdq.c @@ -5,6 +5,7 @@ #include "hinic3_compat.h" #include "hinic3_cmd.h" #include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" #include "hinic3_hwdev.h" #include "hinic3_hwif.h" #include "hinic3_mgmt.h" @@ -125,17 +126,17 @@ #define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi)) -#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) - #define FIRST_DATA_TO_WRITE_LAST sizeof(uint64_t) -#define WQE_LCMD_SIZE 64 -#define WQE_SCMD_SIZE 64 +#define WQE_LCMDQ_SIZE 64 +#define WQE_SCMDQ_SIZE 64 +#define WQE_ENHANCE_CMDQ_SIZE 32 #define COMPLETE_LEN 3 #define CMDQ_WQEBB_SIZE 64 #define CMDQ_WQEBB_SHIFT 6 +#define CMDQ_ENHANCE_WQEBB_SHIFT 4 #define CMDQ_WQE_SIZE 64 @@ -203,43 +204,6 @@ hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf) rte_free(cmd_buf); } -static uint32_t -cmdq_wqe_size(enum cmdq_wqe_type wqe_type) -{ - uint32_t wqe_size = 0; - - switch (wqe_type) { - case WQE_LCMD_TYPE: - wqe_size = WQE_LCMD_SIZE; - break; - case WQE_SCMD_TYPE: - wqe_size = WQE_SCMD_SIZE; - break; - } - - return wqe_size; -} - -static uint32_t -cmdq_get_wqe_size(enum bufdesc_len len) -{ - uint32_t wqe_size = 0; - - switch (len) { - case BUFDESC_LCMD_LEN: - wqe_size = WQE_LCMD_SIZE; - break; - case BUFDESC_SCMD_LEN: - wqe_size = WQE_SCMD_SIZE; - break; - default: - PMD_DRV_LOG(ERR, "Invalid bufdesc_len"); - break; - } - - return wqe_size; -} - static void cmdq_set_completion(struct hinic3_cmdq_completion *complete, struct hinic3_cmd_buf *buf_out) @@ -274,11 +238,11 @@ cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type, } static void -cmdq_wqe_fill(void *dst, void *src) +cmdq_wqe_fill(void *dst, void *src, int wqe_size) { memcpy((void *)((uint8_t *)dst + FIRST_DATA_TO_WRITE_LAST), (void *)((uint8_t *)src + FIRST_DATA_TO_WRITE_LAST), - CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST); + wqe_size - FIRST_DATA_TO_WRITE_LAST); /* The first 8 bytes should be written last. */ rte_atomic_thread_fence(rte_memory_order_release); @@ -369,182 +333,94 @@ cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in); } -/** - * Prepare necessary context for command queue, send a synchronous command with - * a direct response to hardware. It waits for completion of command by polling - * command queue for a response. - * - * @param[in] cmdq - * The command queue object that represents the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. 
- * @param[in] buf_in - * The input buffer containing the command parameters. - * @param[out] out_param - * A pointer to the location where the response data will be stored, if - * available. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - uint64_t *out_param, uint32_t timeout) +static void +cmdq_sync_wqe_prepare(struct hinic3_cmdq *cmdq, uint8_t mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + struct hinic3_cmdq_wqe *curr_wqe, uint16_t curr_pi, + enum hinic3_cmdq_cmd_type nic_cmd_type) { struct hinic3_cmdq_wqe wqe; - struct hinic3_wq *wq = cmdq->wq; - struct hinic3_cmdq_wqe *curr_wqe = NULL; - struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + int wrapped, wqe_size; + enum cmdq_cmd_type cmd_type; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + wqe_size = cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ ? + WQE_LCMDQ_SIZE : WQE_ENHANCE_CMDQ_SIZE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ - rte_spinlock_lock(&cmdq->cmdq_lock); + memset(&wqe, 0, (uint32_t)wqe_size); - curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } - - memset(&wqe, 0, sizeof(wqe)); wrapped = cmdq->wrapped; - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped, - mod, cmd, curr_prod_idx); - + cmd_type = (nic_cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP) ? + SYNC_CMD_DIRECT_RESP : SYNC_CMD_SGE_RESP; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_set_lcmd_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd, curr_pi); + else + hinic3_enhance_cmdq_set_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd); + /* The data written to HW should be in Big Endian Format */ hinic3_cpu_to_hw(&wqe, wqe_size); - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP; - - cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ - - if (out_param) { - wqe_lcmd = &curr_wqe->wqe_lcmd; - *out_param = rte_cpu_to_be_64(wqe_lcmd->completion.direct_resp); - } - - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; - -cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); - - return err; + cmdq_wqe_fill(curr_wqe, &wqe, wqe_size); } -/** - * Send a synchronous command with detailed response and wait for the - * completion. - * - * @param[in] cmdq - * The command queue object representing the queue to send the command to. 
- * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the parameters for the command. - * @param[out] buf_out - * The output buffer where the detailed response from the hardware will be - * stored. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - struct hinic3_cmd_buf *buf_out, uint32_t timeout) +#define NUM_WQEBBS_FOR_CMDQ_WQE 1 +#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2 + +static int cmdq_sync_cmd(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + uint64_t *out_param, uint32_t timeout, + enum hinic3_cmdq_cmd_type nic_cmd_type) { - struct hinic3_cmdq_wqe wqe; struct hinic3_wq *wq = cmdq->wq; struct hinic3_cmdq_wqe *curr_wqe = NULL; uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + uint32_t time; + uint64_t *direct_resp = NULL; + int err; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + num_wqebbs = (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) ? + NUM_WQEBBS_FOR_CMDQ_WQE : NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ + /* Keep wrapped and doorbell index correct */ rte_spinlock_lock(&cmdq->cmdq_lock); curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { + if (!curr_wqe) { err = -EBUSY; goto cmdq_unlock; } - memset(&wqe, 0, sizeof(wqe)); - wrapped = cmdq->wrapped; + cmdq_sync_wqe_prepare(cmdq, mod, cmd, buf_in, buf_out, + curr_wqe, curr_prod_idx, nic_cmd_type); + + cmdq->cmd_infos[curr_prod_idx].cmd_type = nic_cmd_type; next_prod_idx = curr_prod_idx + num_wqebbs; if (next_prod_idx >= wq->q_depth) { cmdq->wrapped = !cmdq->wrapped; next_prod_idx -= wq->q_depth; } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped, - mod, cmd, curr_prod_idx); - - hinic3_cpu_to_hw(&wqe, wqe_size); - - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP; - - cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); + cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); + time = msecs_to_cycles(timeout ? 
timeout : CMDQ_CMD_TIMEOUT); + err = hinic3_cmdq_poll_msg(cmdq, time); if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); + PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", curr_prod_idx); err = -ETIMEDOUT; goto cmdq_unlock; } - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ + rte_atomic_thread_fence(rte_memory_order_acquire); /* Read error code after completion */ + + if (out_param) { + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + direct_resp = + (uint64_t *)(&curr_wqe->wqe_lcmd.completion.direct_resp); + else + direct_resp = (uint64_t *) + (&curr_wqe->enhanced_cmdq_wqe.completion.sge_resp_lo_addr); + + *out_param = rte_cpu_to_be_64(*direct_resp); + } if (cmdq->errcode[curr_prod_idx]) err = cmdq->errcode[curr_prod_idx]; @@ -588,7 +464,8 @@ wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs) int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, - struct hinic3_cmd_buf *buf_in, uint64_t *out_param, uint32_t timeout) + struct hinic3_cmd_buf *buf_in, + uint64_t *out_param, uint32_t timeout) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; int err; @@ -605,8 +482,8 @@ hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, out_param, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, + NULL, out_param, timeout, HINIC3_CMD_TYPE_DIRECT_RESP); } int @@ -628,8 +505,8 @@ hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, buf_out, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, buf_out, + NULL, timeout, HINIC3_CMD_TYPE_SGE_RESP); } static void @@ -643,21 +520,23 @@ clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) { struct hinic3_ctrl *ctrl = NULL; uint32_t header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info); - int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN); - uint32_t wqe_size = cmdq_get_wqe_size(buf_len); uint16_t num_wqebbs; - - if (wqe_size == WQE_LCMD_SIZE) - ctrl = &wqe->wqe_lcmd.ctrl; - else - ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; - - /* Clear HW busy bit. */ - ctrl->ctrl_info = 0; + enum data_format df; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT); + if (df == DATA_SGE) + ctrl = &wqe->wqe_lcmd.ctrl; + else + ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; + ctrl->ctrl_info = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE; + } else { + wqe->enhanced_cmdq_wqe.completion.cs_format = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; + } rte_atomic_thread_fence(rte_memory_order_release); /**< Verify wqe is cleared. 
*/ - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq); hinic3_put_wqe(cmdq->wq, num_wqebbs); } @@ -735,25 +614,28 @@ static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; - struct hinic3_cmd_cmdq_ctxt cmdq_ctxt; - enum hinic3_cmdq_type cmdq_type; + struct hinic3_cmd_cmdq_ctxt cmdq_ctxt = {0}; + enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; uint16_t out_size = sizeof(cmdq_ctxt); + uint16_t cmd; int err; - for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { - memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt)); - cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + if (hwdev->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + cmd = HINIC3_MGMT_CMD_SET_CMDQ_CTXT; + } else { + cmdq_ctxt.enhance_ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_enhance_ctxt; + cmd = HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT; + } cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev); cmdq_ctxt.cmdq_id = cmdq_type; - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, - HINIC3_MGMT_CMD_SET_CMDQ_CTXT, + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, &cmdq_ctxt, sizeof(cmdq_ctxt), &cmdq_ctxt, &out_size); - if (err || !out_size || cmdq_ctxt.status) { - PMD_DRV_LOG(ERR, - "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", + PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", err, cmdq_ctxt.status, out_size); return -EFAULT; } @@ -794,6 +676,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) cmdqs->cmdqs_db_base = (uint8_t *)db_base; for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + cmdqs->cmdq[cmdq_type].cmdqs = cmdqs; err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, &cmdqs->saved_wqs[cmdq_type], cmdq_type); if (err) { @@ -801,8 +684,11 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) goto init_cmdq_err; } - cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], - &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + if (cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], + &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + else + hinic3_enhance_cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type]); } err = hinic3_set_cmdq_ctxts(hwdev); @@ -821,11 +707,12 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) } int -hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_init(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = NULL; size_t saved_wqs_size; char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE]; + uint32_t wqebb_shift; int err; cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0); @@ -835,6 +722,14 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) hwdev->cmdqs = cmdqs; cmdqs->hwdev = hwdev; + if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev)) + cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ; + else + cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ; + + wqebb_shift = (cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) ? 
+ CMDQ_ENHANCE_WQEBB_SHIFT : CMDQ_WQEBB_SHIFT; + saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq); cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0); if (!cmdqs->saved_wqs) { @@ -844,8 +739,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE); - snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", - hwdev->port_id); + snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", hwdev->port_id); cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name, HINIC3_CMDQ_DEPTH * HINIC3_MAX_CMDQ_TYPES, 0, 0, @@ -857,8 +751,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES, - HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT, - HINIC3_CMDQ_DEPTH); + HINIC3_CMDQ_WQ_BUF_SIZE, wqebb_shift, HINIC3_CMDQ_DEPTH); if (err) { PMD_DRV_LOG(ERR, "Allocate cmdq failed"); goto cmdq_alloc_err; @@ -884,7 +777,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } void -hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdqs_free(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; @@ -900,14 +793,36 @@ hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) rte_free(cmdqs); } +static int +hinic3_check_cmdq_done(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) +{ + struct hinic3_ctrl *ctrl = NULL; + uint32_t ctrl_info; + + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + /* Only arm bit using scmd wqe, the wqe is lcmd. */ + ctrl = &wqe->wqe_lcmd.ctrl; + ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); + + if (!WQE_COMPLETED(ctrl_info)) + return -EBUSY; + } else { + ctrl_info = wqe->enhanced_cmdq_wqe.completion.cs_format; + ctrl_info = hinic3_hw_cpu32(ctrl_info); + + if (!ENHANCE_CMDQ_WQE_CS_GET(ctrl_info, HW_BUSY)) + return -EBUSY; + } + return 0; +} + static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) { struct hinic3_cmdq_wqe *wqe = NULL; struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - struct hinic3_ctrl *ctrl = NULL; struct hinic3_cmdq_cmd_info *cmd_info = NULL; - uint32_t status_info, ctrl_info; + uint32_t status_info; uint16_t ci; int errcode; uint64_t end; @@ -928,13 +843,10 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) return -EINVAL; } - /* Only arm bit is using scmd wqe, the wqe is lcmd. */ - wqe_lcmd = &wqe->wqe_lcmd; - ctrl = &wqe_lcmd->ctrl; + /* Only arm bit using scmd wqe, the wqe is lcmd. 
*/ end = cycles + msecs_to_cycles(timeout); do { - ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); - if (WQE_COMPLETED(ctrl_info)) { + if (hinic3_check_cmdq_done(cmdq, wqe) == 0) { done = 1; break; } @@ -943,8 +855,14 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) } while (time_before(cycles, end)); if (done) { - status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); - errcode = WQE_ERRCODE_GET(status_info, VAL); + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + wqe_lcmd = &wqe->wqe_lcmd; + status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); + errcode = WQE_ERRCODE_GET(status_info, VAL); + } else { + status_info = hinic3_hw_cpu32(wqe->enhanced_cmdq_wqe.completion.cs_format); + errcode = ENHANCE_CMDQ_WQE_CS_GET(status_info, ERR_CODE); + } cmdq_update_errcode(cmdq, ci, errcode); clear_wqe_complete_bit(cmdq, wqe); err = 0; diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h index deac909488..b31b61029e 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.h +++ b/drivers/net/hinic3/base/hinic3_cmdq.h @@ -13,25 +13,55 @@ /* Pmd driver uses 64, kernel l2nic uses 4096. */ #define HINIC3_CMDQ_DEPTH 64 -#define HINIC3_CMDQ_BUF_SIZE 2048U +#define HINIC3_CMDQ_BUF_SIZE 1024U #define HINIC3_CEQ_ID_CMDQ 0 -enum cmdq_scmd_type { CMDQ_SET_ARM_CMD = 2 }; +#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -enum cmdq_wqe_type { WQE_LCMD_TYPE = 0, WQE_SCMD_TYPE = 1 }; +enum hinic3_cmdq_mode { + HINIC3_NORMAL_CMDQ, + HINIC3_ENHANCE_CMDQ +}; + +enum cmdq_scmd_type { + CMDQ_SET_ARM_CMD = 2 +}; + +enum cmdq_wqe_type { + WQE_LCMD_TYPE, + WQE_SCMD_TYPE +}; -enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 }; +enum ctrl_sect_len { + CTRL_SECT_LEN = 1, + CTRL_DIRECT_SECT_LEN = 2 +}; -enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 }; +enum bufdesc_len { + BUFDESC_LCMD_LEN = 2, + BUFDESC_SCMD_LEN = 3 +}; -enum data_format { DATA_SGE = 0}; +enum data_format { + DATA_SGE +}; -enum completion_format { COMPLETE_DIRECT = 0, COMPLETE_SGE = 1 }; +enum completion_format { + COMPLETE_DIRECT, + COMPLETE_SGE +}; -enum completion_request { CEQ_SET = 1 }; +enum completion_request { + CEQ_SET = 1 +}; -enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD }; +enum cmdq_cmd_type { + SYNC_CMD_DIRECT_RESP, + SYNC_CMD_SGE_RESP, + ASYNC_CMD +}; enum hinic3_cmdq_type { HINIC3_CMDQ_SYNC, @@ -44,17 +74,63 @@ enum hinic3_db_src_type { HINIC3_DB_SRC_L2NIC_SQ_TYPE }; -enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE }; +enum hinic3_cmdq_db_type { + HINIC3_DB_SQ_RQ_TYPE, + HINIC3_DB_CMDQ_TYPE +}; /* Cmdq ack type. */ enum hinic3_ack_type { HINIC3_ACK_TYPE_CMDQ = 0, HINIC3_ACK_TYPE_SHARE_CQN = 1, HINIC3_ACK_TYPE_APP_CQN = 2, - HINIC3_MOD_ACK_MAX = 15 }; +struct cmdq_enhance_completion { + uint32_t cs_format; + uint32_t sge_resp_hi_addr; + uint32_t sge_resp_lo_addr; + uint32_t sge_resp_len; /* bit 14~31 rsvd, soft can't use. 
*/ +}; + +struct cmdq_enhance_response { + uint32_t cs_format; + uint32_t resvd; + uint64_t direct_data; +}; + +struct sge_send_info { + uint32_t sge_hi_addr; + uint32_t sge_li_addr; + uint32_t seg_len; + uint32_t rsvd; +}; + +struct ctrl_section { + uint32_t header; + uint32_t rsv; + uint32_t sge_send_hi_addr; + uint32_t sge_send_lo_addr; +}; + +struct enhanced_cmdq_wqe { + struct ctrl_section ctrl_sec; /* 16B */ + struct cmdq_enhance_completion completion; /* 16B */ +}; + +/* Enhance cmdq context of hardware */ +struct enhance_cmdq_ctxt_info { + uint64_t eq_cfg; + uint64_t dfx_pi_ci; + + uint64_t pft_thd; + uint64_t pft_ci; + + uint64_t rsv; + uint64_t ci_cla_addr; +}; + /* Cmdq wqe ctrls. */ struct hinic3_cmdq_header { uint32_t header_info; @@ -126,6 +202,7 @@ struct hinic3_cmdq_wqe { union { struct hinic3_cmdq_inline_wqe inline_wqe; struct hinic3_cmdq_wqe_lcmd wqe_lcmd; + struct enhanced_cmdq_wqe enhanced_cmdq_wqe; }; }; @@ -142,8 +219,10 @@ struct hinic3_cmd_cmdq_ctxt { uint16_t func_idx; uint8_t cmdq_id; uint8_t rsvd1[5]; - - struct hinic3_cmdq_ctxt_info ctxt_info; + union { + struct hinic3_cmdq_ctxt_info ctxt_info; + struct enhance_cmdq_ctxt_info enhance_ctxt_info; + }; }; enum hinic3_cmdq_status { @@ -173,8 +252,10 @@ struct hinic3_cmdq { rte_spinlock_t cmdq_lock; struct hinic3_cmdq_ctxt_info cmdq_ctxt; + struct enhance_cmdq_ctxt_info cmdq_enhance_ctxt; struct hinic3_cmdq_cmd_info *cmd_infos; + struct hinic3_cmdqs *cmdqs; }; struct hinic3_cmdqs { @@ -188,6 +269,7 @@ struct hinic3_cmdqs { struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES]; uint32_t status; + uint8_t cmdq_mode; }; struct hinic3_cmd_buf { @@ -215,8 +297,8 @@ int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod int hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, uint32_t timeout); -int hinic3_init_cmdqs(struct hinic3_hwdev *hwdev); +int hinic3_cmdq_init(struct hinic3_hwdev *hwdev); -void hinic3_free_cmdqs(struct hinic3_hwdev *hwdev); +void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev); #endif /* _HINIC3_CMDQ_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.c b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c new file mode 100644 index 0000000000..e09597c9f3 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Huawei Technologies Co., Ltd + */ + +#include <rte_mbuf.h> + +#include "hinic3_compat.h" +#include "hinic3_hwdev.h" +#include "hinic3_hwif.h" +#include "hinic3_wq.h" +#include "hinic3_cmd.h" +#include "hinic3_mgmt.h" +#include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" + +#define WQ_PREFETCH_MAX 4 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +void +hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq) +{ + struct enhance_cmdq_ctxt_info *ctxt_info = &cmdq->cmdq_enhance_ctxt; + struct hinic3_wq *wq = cmdq->wq; + uint64_t cmdq_first_block_paddr, pfn; + uint16_t start_ci = (uint16_t)wq->cons_idx; + uint32_t start_pi = (uint16_t)wq->prod_idx; + + /* The data in HW is Big Endian Format */ + cmdq_first_block_paddr = wq->queue_buf_paddr; + pfn = CMDQ_PFN(cmdq_first_block_paddr, RTE_PGSIZE_4K); + + /* First part 16B */ + ctxt_info->eq_cfg = + ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) | + ENHANCED_CMDQ_SET(0, CTXT0_EQ) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) | + ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT); + + 
ctxt_info->dfx_pi_ci = + ENHANCED_CMDQ_SET(0, CTXT1_Q_DIS) | + ENHANCED_CMDQ_SET(0, CTXT1_ERR_CODE) | + ENHANCED_CMDQ_SET(start_pi, CTXT1_PI) | + ENHANCED_CMDQ_SET(start_ci, CTXT1_CI); + + /* Second part 16B */ + ctxt_info->pft_thd = + ENHANCED_CMDQ_SET(CI_HIGN_IDX(start_ci), CTXT2_PFT_CI) | + ENHANCED_CMDQ_SET(1, CTXT2_O_BIT) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MIN, CTXT2_PFT_MIN) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MAX, CTXT2_PFT_MAX) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_THRESHOLD, CTXT2_PFT_THD); + ctxt_info->pft_ci = + ENHANCED_CMDQ_SET(pfn, CTXT3_PFT_CI_ADDR) | + ENHANCED_CMDQ_SET(start_ci, CTXT3_PFT_CI); + + /* Third part 16B */ + pfn = WQ_BLOCK_PFN(cmdq_first_block_paddr); + + ctxt_info->ci_cla_addr = ENHANCED_CMDQ_SET(pfn, CTXT4_CI_CLA_ADDR); +} + +static void +enhance_cmdq_set_completion(struct cmdq_enhance_completion *completion, + const struct hinic3_cmd_buf *buf_out) +{ + completion->sge_resp_hi_addr = upper_32_bits(buf_out->dma_addr); + completion->sge_resp_lo_addr = lower_32_bits(buf_out->dma_addr); + completion->sge_resp_len = HINIC3_CMDQ_BUF_SIZE; +} + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, + enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, + struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd) +{ + struct enhanced_cmdq_wqe *enhanced_wqe = &wqe->enhanced_cmdq_wqe; + + enhanced_wqe->ctrl_sec.header = + ENHANCE_CMDQ_WQE_HEADER_SET(buf_in->size, SEND_SGE_LEN) | + ENHANCE_CMDQ_WQE_HEADER_SET(1, BDSL) | + ENHANCE_CMDQ_WQE_HEADER_SET(DATA_SGE, DF) | + ENHANCE_CMDQ_WQE_HEADER_SET(NORMAL_WQE_TYPE, DN) | + ENHANCE_CMDQ_WQE_HEADER_SET(COMPACT_WQE_TYPE, EC) | + ENHANCE_CMDQ_WQE_HEADER_SET((uint32_t)wrapped, HW_BUSY_BIT); + + enhanced_wqe->ctrl_sec.sge_send_hi_addr = upper_32_bits(buf_in->dma_addr); + enhanced_wqe->ctrl_sec.sge_send_lo_addr = lower_32_bits(buf_in->dma_addr); + + enhanced_wqe->completion.cs_format = + ENHANCE_CMDQ_WQE_CS_SET(cmd, CMD) | + ENHANCE_CMDQ_WQE_CS_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE) | + ENHANCE_CMDQ_WQE_CS_SET(mod, MOD); + + switch (cmd_type) { + case SYNC_CMD_DIRECT_RESP: + enhanced_wqe->completion.cs_format |= ENHANCE_CMDQ_WQE_CS_SET(INLINE_DATA, CF); + break; + case SYNC_CMD_SGE_RESP: + if (buf_out) { + enhanced_wqe->completion.cs_format |= + ENHANCE_CMDQ_WQE_CS_SET(SGE_RESPONSE, CF); + enhance_cmdq_set_completion(&enhanced_wqe->completion, buf_out); + } + break; + case ASYNC_CMD: + break; + } +} diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.h b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h new file mode 100644 index 0000000000..8de0ae4d71 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_CMDQ_ENHANCE_H_ +#define _HINIC3_CMDQ_ENHANCE_H_ + +#include "hinic3_mgmt.h" + +#define NORMAL_WQE_TYPE 0 +#define COMPACT_WQE_TYPE 1 + +/* First part 16B */ +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT0_RSV1_SHIFT 52 +#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT 53 +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT 61 +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT 62 +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63 + +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT0_RSV1_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_EQ_MASK 0xFFU +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_SHIFT 0 
+#define ENHANCED_CMDQ_CTXT1_ERR_CODE_SHIFT 1 +#define ENHANCED_CMDQ_CTXT1_RSV1_SHIFT 3 +#define ENHANCED_CMDQ_CTXT1_PI_SHIFT 32 +#define ENHANCED_CMDQ_CTXT1_CI_SHIFT 48 + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_MASK 0x1U +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_MASK 0x3U +#define ENHANCED_CMDQ_CTXT1_RSV1_MASK 0x1FFFFFFFU +#define ENHANCED_CMDQ_CTXT1_PI_MASK 0xFFFFU +#define ENHANCED_CMDQ_CTXT1_CI_MASK 0xFFFFU + +/* Second part 16B */ +#define ENHANCED_CMDQ_CTXT2_PFT_CI_SHIFT 0 +#define ENHANCED_CMDQ_CTXT2_O_BIT_SHIFT 4 +#define ENHANCED_CMDQ_CTXT2_PFT_THD_SHIFT 32 +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_SHIFT 46 +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_SHIFT 57 + +#define ENHANCED_CMDQ_CTXT2_PFT_CI_MASK 0xFU +#define ENHANCED_CMDQ_CTXT2_O_BIT_MASK 0x1U +#define ENHANCED_CMDQ_CTXT2_PFT_THD_MASK 0x3FFFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_MASK 0x7FFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_MASK 0x7FU + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT3_PFT_CI_SHIFT 52 + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT3_PFT_CI_MASK 0xFFFFU + +/* Third part 16B */ +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_MASK 0x7FFFFFFFFFFFFFU + +#define ENHANCED_CMDQ_SET(val, member) \ + (((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \ + ENHANCED_CMDQ_##member##_SHIFT) + +#define CI_IDX_HIGH_SHIFH 12 +#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_SHIFT 0 +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_SHIFT 19 +#define ENHANCE_CMDQ_WQE_HEADER_DF_SHIFT 28 +#define ENHANCE_CMDQ_WQE_HEADER_DN_SHIFT 29 +#define ENHANCE_CMDQ_WQE_HEADER_EC_SHIFT 30 +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_MASK 0x3FFFFU +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_HEADER_DF_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_DN_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_EC_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_HEADER_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_HEADER_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0 +#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT 4 +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12 +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT 14 +#define ENHANCE_CMDQ_WQE_CS_MOD_SHIFT 16 +#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU +#define ENHANCE_CMDQ_WQE_CS_CMD_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK 0x1U +#define ENHANCE_CMDQ_WQE_CS_MOD_MASK 0x1FU +#define ENHANCE_CMDQ_WQE_CS_CF_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_CS_##member##_MASK) + +enum complete_format { + INLINE_DATA, + SGE_RESPONSE +}; + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd); + +void hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq 
*cmdq); + +#endif /*_HINIC3_CMDQ_ENHANCE_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c index d259b88a2d..6541bc0428 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.c +++ b/drivers/net/hinic3/base/hinic3_hw_comm.c @@ -12,7 +12,7 @@ #include "hinic3_wq.h" #include "hinic3_nic_cfg.h" -/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */ +/* Buffer sizes must be in ascending order. */ const uint32_t hinic3_hw_rx_buf_size[] = { HINIC3_RX_BUF_SIZE_32B, HINIC3_RX_BUF_SIZE_64B, @@ -239,11 +239,14 @@ hinic3_convert_rx_buf_size(uint32_t rx_buf_sz, uint32_t *match_sz) } static uint16_t -get_hw_rx_buf_size(uint32_t rx_buf_sz) +get_hw_rx_buf_size(struct hinic3_hwdev *hwdev, uint32_t rx_buf_sz) { uint16_t num_hw_types = RTE_DIM(hinic3_hw_rx_buf_size); uint16_t i; + if (HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev)) + return rx_buf_sz; + for (i = 0; i < num_hw_types; i++) { if (hinic3_hw_rx_buf_size[i] == rx_buf_sz) return i; @@ -271,8 +274,12 @@ hinic3_set_root_ctxt(struct hinic3_hwdev *hwdev, uint32_t rq_depth, root_ctxt.cmdq_depth = 0; root_ctxt.lro_en = 1; root_ctxt.rq_depth = (uint16_t)rte_log2_u32(rq_depth); - root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz); + root_ctxt.rx_buf_sz = get_hw_rx_buf_size(hwdev, rx_buf_sz); root_ctxt.sq_depth = (uint16_t)rte_log2_u32(sq_depth); + root_ctxt.cmdq_mode = hwdev->cmdqs->cmdq_mode; + + if (hwdev->cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) + root_ctxt.cmdq_depth--; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, @@ -403,7 +410,7 @@ hinic3_comm_features_nego(struct hinic3_hwdev *hwdev, uint16_t out_size = sizeof(feature_nego); int err; - if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD) + if (!hwdev || !s_feature || size > HINIC3_MAX_FEATURE_QWORD) return -EINVAL; memset(&feature_nego, 0, sizeof(feature_nego)); diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h index b86f5aad8f..42ff04ee9d 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.h +++ b/drivers/net/hinic3/base/hinic3_hw_comm.h @@ -9,17 +9,17 @@ #define HINIC3_MGMT_CMD_OP_GET 0 #define HINIC3_MGMT_CMD_OP_SET 1 -#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 -#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 -#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 -#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 -#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 - -#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU -#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU -#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U +#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 +#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 +#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 +#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 +#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 + +#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU +#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU +#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U #define HINIC3_MSIX_CNT_SET(val, member) \ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \ @@ -129,7 +129,7 @@ struct hinic3_cmd_root_ctxt { uint8_t cmdq_depth; uint16_t rx_buf_sz; uint8_t lro_en; - uint8_t rsvd1; + uint8_t cmdq_mode; uint16_t sq_depth; uint16_t rq_depth; uint64_t rsvd2; @@ -143,17 +143,16 @@ enum hinic3_fw_ver_type { HINIC3_FW_VER_TYPE_CFG, }; -#define MGMT_MSG_CMD_OP_SET 1 -#define MGMT_MSG_CMD_OP_GET 0 +#define 
MGMT_MSG_CMD_OP_SET 1 +#define MGMT_MSG_CMD_OP_GET 0 -#define COMM_MAX_FEATURE_QWORD 4 struct comm_cmd_feature_nego { struct mgmt_msg_head head; uint16_t func_id; uint8_t opcode; /**< 1: set, 0: get. */ uint8_t rsvd; - uint64_t s_feature[COMM_MAX_FEATURE_QWORD]; + uint64_t s_feature[HINIC3_MAX_FEATURE_QWORD]; }; #define HINIC3_FW_VERSION_LEN 16 diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c index 668bbf4a0e..5d12cf7b5f 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.c +++ b/drivers/net/hinic3/base/hinic3_hwdev.c @@ -261,7 +261,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) { int err; - err = hinic3_init_cmdqs(hwdev); + err = hinic3_cmdq_init(hwdev); if (err) { PMD_DRV_LOG(ERR, "Init cmd queues failed"); return err; @@ -276,7 +276,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) return 0; set_cmdq_depth_err: - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); return err; } @@ -284,7 +284,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev) { - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); } static void @@ -426,6 +426,12 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) goto func_reset_err; } + err = hinic3_get_comm_features(hwdev, hwdev->features, HINIC3_MAX_FEATURE_QWORD); + if (err) { + PMD_DRV_LOG(ERR, "Get comm features failed"); + goto get_common_features_err; + } + err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1); if (err) goto set_used_state_err; @@ -444,6 +450,7 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0); set_used_state_err: func_reset_err: +get_common_features_err: get_func_info_err: free_mgmt_channel(hwdev); diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h index 161f1e2de5..c6661aa1a6 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.h +++ b/drivers/net/hinic3/base/hinic3_hwdev.h @@ -23,6 +23,18 @@ enum hinic3_set_arm_type { HINIC3_SET_ARM_TYPE_NUM }; +enum { + HINIC3_F_API_CHAIN = 1U << 0, + HINIC3_F_CLP = 1U << 1, + HINIC3_F_CHANNEL_DETECT = 1U << 2, + HINIC3_F_MBOX_SEGMENT = 1U << 3, + HINIC3_F_CMDQ_NUM = 1U << 4, + HINIC3_F_VIRTIO_VQ_SIZE = 1U << 5, + HINIC3_F_EXTEND_CAP = 1U << 6, + HINIC3_F_SMF_CACHE_INVALID = 1U << 7, + HINIC3_F_ONLY_ENHANCE_CMDQ = 1U << 8, + HINIC3_F_USE_REAL_RX_BUF_SIZE = 1U << 9, +}; struct hinic3_page_addr { void *virt_addr; uint64_t phys_addr; @@ -78,6 +90,11 @@ struct hinic3_hw_stats { #define HINIC3_CHIP_FAULT_SIZE (110 * 1024) #define MAX_DRV_BUF_SIZE 4096 +#define HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_ONLY_ENHANCE_CMDQ) +#define HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_USE_REAL_RX_BUF_SIZE) + struct nic_cmd_chip_fault_stats { uint32_t offset; uint8_t chip_fault_stats[MAX_DRV_BUF_SIZE]; @@ -141,6 +158,7 @@ struct hinic3_hwdev { uint16_t max_vfs; uint16_t link_status; + uint64_t features[HINIC3_MAX_FEATURE_QWORD]; }; bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev); diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c index 5db6d49922..b1f850dfff 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.c +++ b/drivers/net/hinic3/base/hinic3_mgmt.c @@ -13,6 +13,8 @@ #define SEGMENT_LEN 48 #define MGMT_MSG_MAX_SEQ_ID \ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN) +#define MGMT_MSG_LAST_SEG_MAX_LEN \ + 
(MAX_PF_MGMT_BUF_SIZE - SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID) #define BUF_OUT_DEFAULT_SIZE 1 @@ -34,7 +36,8 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, uint8_t seq_id, uint8_t seg_len, uint16_t msg_id) { - if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN) + if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN || + (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)) return false; if (seq_id == 0) { diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h index f8148406d3..4e77b9bec4 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.h +++ b/drivers/net/hinic3/base/hinic3_mgmt.h @@ -70,6 +70,8 @@ typedef enum { #define HINIC3_TOE_RES (1 << RES_TYPE_TOE) #define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC) +#define HINIC3_MAX_FEATURE_QWORD 4 + struct hinic3_recv_msg { void *msg; diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index c35fefdeac..ac44da46c2 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -48,6 +48,46 @@ static const struct vf_msg_handler vf_mag_cmd_handler[] = { }, }; +int +hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, + uint16_t cmd, void *buf_in, uint16_t in_size, + void *buf_out, uint16_t *out_size) +{ + uint32_t i; + bool cmd_to_pf = false; + struct hinic3_handler_info handler_info = { + .cmd = cmd, + .buf_in = buf_in, + .in_size = in_size, + .buf_out = buf_out, + .out_size = out_size, + .dst_func = HINIC3_MGMT_SRC_ID, + .direction = HINIC3_MSG_DIRECT_SEND, + .ack_type = HINIC3_MSG_ACK, + }; + + if (hwdev == NULL) + return -EINVAL; + + if (hinic3_func_type(hwdev) == TYPE_VF) { + if (mod == HINIC3_MOD_HILINK) { + for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { + if (cmd == vf_mag_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } else if (mod == HINIC3_MOD_L2NIC) { + for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { + if (cmd == vf_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } + } + if (cmd_to_pf) + handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); + + return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); +} + /** * Set CI table for a SQ. 
* @@ -1712,43 +1752,6 @@ hinic3_set_rq_flush(struct hinic3_hwdev *hwdev, uint16_t q_id) return err; } -int -hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, - uint16_t cmd, void *buf_in, uint16_t in_size, - void *buf_out, uint16_t *out_size) -{ - uint32_t i; - bool cmd_to_pf = false; - struct hinic3_handler_info handler_info = { - .cmd = cmd, - .buf_in = buf_in, - .in_size = in_size, - .buf_out = buf_out, - .out_size = out_size, - .dst_func = HINIC3_MGMT_SRC_ID, - .direction = HINIC3_MSG_DIRECT_SEND, - .ack_type = HINIC3_MSG_ACK, - }; - - if (hinic3_func_type(hwdev) == TYPE_VF) { - if (mod == HINIC3_MOD_HILINK) { - for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { - if (cmd == vf_mag_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } else if (mod == HINIC3_MOD_L2NIC) { - for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { - if (cmd == vf_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } - } - if (cmd_to_pf) - handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); - - return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); -} - int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status) diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build index 48ac7a47f5..729980d087 100644 --- a/drivers/net/hinic3/base/meson.build +++ b/drivers/net/hinic3/base/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2025 Huawei Technologies Co., Ltd base_sources = files( + 'hinic3_cmdq_enhance.c', 'hinic3_cmdq.c', 'hinic3_eqs.c', 'hinic3_hw_cfg.c', -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
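The enhanced cmdq context and WQE fields in the patch above are packed with families of *_SHIFT/*_MASK macros (for example ENHANCED_CMDQ_SET() and ENHANCE_CMDQ_WQE_CS_SET()/_GET()). The following is a minimal standalone sketch of that packing convention only; the field names, widths and values are hypothetical and do not match the driver's real register layout.

#include <stdint.h>
#include <stdio.h>
#include <assert.h>

/* Hypothetical 12-bit field at bit 0 and 1-bit flag at bit 12. */
#define DEMO_FIELD_A_SHIFT 0
#define DEMO_FIELD_A_MASK  0xFFFU
#define DEMO_FIELD_B_SHIFT 12
#define DEMO_FIELD_B_MASK  0x1U

/* Same token-pasting pattern as the ENHANCED_CMDQ_SET()-style helpers. */
#define DEMO_SET(val, member) \
	(((uint32_t)(val) & DEMO_##member##_MASK) << DEMO_##member##_SHIFT)
#define DEMO_GET(word, member) \
	(((word) >> DEMO_##member##_SHIFT) & DEMO_##member##_MASK)

int main(void)
{
	/* Pack two fields into one 32-bit control word, then read them back,
	 * mirroring how the WQE/context helper macros are used in the patch.
	 */
	uint32_t word = DEMO_SET(0x123, FIELD_A) | DEMO_SET(1, FIELD_B);

	assert(DEMO_GET(word, FIELD_A) == 0x123);
	assert(DEMO_GET(word, FIELD_B) == 1);
	printf("packed word = 0x%x\n", word);
	return 0;
}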
* [V4 3/7] net/hinic3: use different callback func to split new/old cmdq operations
2026-03-18 6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang
2026-03-18 6:20 ` [V4 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang
2026-03-18 6:20 ` [V4 2/7] net/hinic3: add enhance cmdq " Feifei Wang
@ 2026-03-18 6:20 ` Feifei Wang
2026-03-18 6:20 ` [V4 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang
` (3 subsequent siblings)
6 siblings, 0 replies; 80+ messages in thread
From: Feifei Wang @ 2026-03-18 6:20 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang
From: Feifei Wang <wangfeifei40@huawei.com>
For the new SPx series NIC with enhance cmdq, the driver sends control messages to the hardware tile in the NIC (htn). This is different from the previous SPx NIC, where the driver sends control messages to the software tile in the NIC (stn).
Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_nic_cfg.c | 50 ++----
drivers/net/hinic3/base/hinic3_nic_cfg.h | 79 +++++----
drivers/net/hinic3/hinic3_ethdev.c | 16 +-
drivers/net/hinic3/hinic3_nic_io.h | 130 ++++++++++++++
drivers/net/hinic3/hinic3_rx.c | 3 +-
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 161 ++++++++++++++++++
.../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 55 ++++++
drivers/net/hinic3/htn_adapt/meson.build | 7 +
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 145 ++++++++++++++++
.../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 38 +++++
drivers/net/hinic3/stn_adapt/meson.build | 7 +
11 files changed, 618 insertions(+), 73 deletions(-)
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c
create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h
create mode 100644 drivers/net/hinic3/htn_adapt/meson.build
create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c
create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h
create mode 100644 drivers/net/hinic3/stn_adapt/meson.build
diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c
index ac44da46c2..22caac0457 100644
--- a/drivers/net/hinic3/base/hinic3_nic_cfg.c
+++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c
@@ -11,6 +11,7 @@
 #include "hinic3_mbox.h"
 #include "hinic3_nic_cfg.h"
 #include "hinic3_wq.h"
+#include "hinic3_nic_io.h"
 struct vf_msg_handler {
 	uint16_t cmd;
@@ -442,6 +443,7 @@ int
 hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable)
 {
 	struct hinic3_vport_state en_state;
+	struct hinic3_nic_dev *nic_dev = hwdev->dev_handle;
 	uint16_t out_size = sizeof(en_state);
 	int err;
@@ -451,6 +453,7 @@ hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable)
 	memset(&en_state, 0, sizeof(en_state));
 	en_state.func_id = hinic3_global_func_id(hwdev);
 	en_state.state = enable ? 
1 : 0; + en_state.num_qps = nic_dev->num_rqs; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_VPORT_ENABLE, @@ -1159,13 +1162,12 @@ hinic3_rss_set_hash_key(struct hinic3_hwdev *hwdev, uint8_t *key, uint16_t key_s } int -hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, - uint32_t *indir_table, uint32_t indir_table_size) +hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table) { struct hinic3_cmd_buf *cmd_buf = NULL; - uint16_t *indir_tbl = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; int err; - uint32_t i; if (!hwdev || !indir_table) return -EINVAL; @@ -1177,31 +1179,28 @@ hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, } cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, - cmd_buf, cmd_buf, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_get_rss_indir_table(nic_dev, cmd_buf); + err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, cmd_buf, 0); if (err) { PMD_DRV_LOG(ERR, "Get rss indir table failed"); hinic3_free_cmd_buf(cmd_buf); return err; } - indir_tbl = (uint16_t *)cmd_buf->buf; - for (i = 0; i < indir_table_size; i++) - indir_table[i] = *(indir_tbl + i); + nic_dev->cmdq_ops->cmd_buf_to_rss_indir_table(cmd_buf, indir_table); hinic3_free_cmd_buf(cmd_buf); return 0; } int -hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size) +hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table) { - struct nic_rss_indirect_tbl *indir_tbl = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - uint32_t i, size; - uint32_t *temp = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; uint64_t out_param = 0; int err; @@ -1214,22 +1213,9 @@ hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table return -ENOMEM; } - cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; - memset(indir_tbl, 0, sizeof(*indir_tbl)); - - for (i = 0; i < indir_table_size; i++) - indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - size = sizeof(indir_tbl->entry) / sizeof(uint16_t); - temp = (uint32_t *)indir_tbl->entry; - for (i = 0; i < size; i++) - temp[i] = rte_cpu_to_be_32(temp[i]); - - err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, - cmd_buf, &out_param, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir_table, cmd_buf); + err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set rss indir table failed"); err = -EFAULT; @@ -1477,7 +1463,7 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EIO; } - *cos_id = vf_dcb.state.default_cos; + *cos_id = vf_dcb.state.default_cos % HINIC3_COS_NUM_MAX_HTN; return 0; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index a88d62333d..06d5bc7d1b 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -14,16 +14,17 @@ #define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1) #define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1) -#define HINIC3_DCB_UP_MAX 0x8 +#define HINIC3_DCB_UP_MAX 0x8 -#define 
HINIC3_MAX_NUM_RQ 256 +#define HINIC3_MAX_NUM_RQ 256 -#define HINIC3_MAX_MTU_SIZE 9600 -#define HINIC3_MIN_MTU_SIZE 256 +#define HINIC3_MAX_MTU_SIZE 9600 +#define HINIC3_MIN_MTU_SIZE 256 -#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX_HTN 4 -#define HINIC3_VLAN_TAG_SIZE 4 +#define HINIC3_VLAN_TAG_SIZE 4 #define HINIC3_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2) @@ -34,28 +35,41 @@ #define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen) -#define HINIC3_PF_SET_VF_ALREADY 0x4 -#define HINIC3_MGMT_STATUS_EXIST 0x6 -#define CHECK_IPSU_15BIT 0x8000 +#define HINIC3_PF_SET_VF_ALREADY 0x4 +#define HINIC3_MGMT_STATUS_EXIST 0x6 +#define CHECK_IPSU_15BIT 0x8000 -#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB -#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC +#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB +#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC -#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF +#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF -#define HINIC3_MAX_UC_MAC_ADDRS 128 -#define HINIC3_MAX_MC_MAC_ADDRS 2048 +#define HINIC3_MAX_UC_MAC_ADDRS 128 +#define HINIC3_MAX_MC_MAC_ADDRS 2048 -#define CAP_INFO_MAX_LEN 512 -#define VENDOR_MAX_LEN 17 +#define CAP_INFO_MAX_LEN 512 +#define VENDOR_MAX_LEN 17 /* Structures for RSS config. */ -#define HINIC3_RSS_INDIR_SIZE 256 -#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 -#define HINIC3_RSS_KEY_SIZE 40 -#define HINIC3_RSS_ENABLE 0x01 -#define HINIC3_RSS_DISABLE 0x00 -#define HINIC3_INVALID_QID_BASE 0xffff +#define HINIC3_RSS_INDIR_SIZE 256 +#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 +#define HINIC3_RSS_KEY_SIZE 40 +#define HINIC3_RSS_ENABLE 0x01 +#define HINIC3_RSS_DISABLE 0x00 +#define HINIC3_INVALID_QID_BASE 0xffff + +#define HINIC3_SUPPORT_FEATURE(dev, feature) \ + ((hinic3_get_driver_feature(dev) & NIC_F_##feature) != 0) +#define HINIC3_SUPPORT_RX_HW_COMPACT_CQE(dev) \ + HINIC3_SUPPORT_FEATURE(dev, RX_HW_COMPACT_CQE) +#define HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(dev) \ + HINIC3_SUPPORT_FEATURE(dev, TX_WQE_COMPACT_TASK) +#define HINIC3_SUPPORT_VXLAN_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, VXLAN_OFFLOAD) +#define HINIC3_SUPPORT_GENEVE_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, GENEVE_OFFLOAD) +#define HINIC3_SUPPORT_IPXIP_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, IPXIP_OFFLOAD) struct hinic3_rss_type { uint8_t tcp_ipv6_ext; @@ -312,7 +326,9 @@ struct hinic3_vport_state { uint16_t func_id; uint16_t rsvd1; uint8_t state; /**< 0:disable, 1:enable. */ - uint8_t rsvd2[3]; + uint8_t num_qps; + uint8_t rx_compact_wqe_en; + uint8_t rsvd2; }; #define MAG_CMD_PORT_DISABLE 0x0 @@ -670,12 +686,15 @@ enum hinic3_func_tbl_cfg_bitmap { FUNC_CFG_INIT, FUNC_CFG_RX_BUF_SIZE, FUNC_CFG_MTU, + FUNC_CFG_RX_COMPACT_WQE_EN, /**< Enable 8Byte WQE. */ }; struct hinic3_func_tbl_cfg { uint16_t rx_wqe_buf_size; uint16_t mtu; - uint32_t rsvd[9]; + uint8_t rx_compact_wqe_en; /**< Enable Rx 8Byte compact WQE. */ + uint8_t rsvd0[3]; + uint32_t rsvd1[8]; }; struct hinic3_cmd_set_func_tbl { @@ -895,7 +914,7 @@ struct hinic3_set_fdir_ethertype_rule { struct mgmt_msg_head head; uint16_t func_id; - uint16_t rsvd1; + uint16_t index; uint8_t pkt_type_en; uint8_t pkt_type; uint8_t qid; @@ -1231,14 +1250,11 @@ int hinic3_rss_template_free(struct hinic3_hwdev *hwdev); * Device pointer to hwdev. * @param[in] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. 
*/ -int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table); /** * Get RSS indirect table. @@ -1247,14 +1263,11 @@ int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_t * Device pointer to hwdev. * @param[out] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table); /** * Set RSS type. diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index da2d6722d2..780b17414a 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -15,6 +15,8 @@ #include "base/hinic3_hw_comm.h" #include "base/hinic3_nic_cfg.h" #include "base/hinic3_nic_event.h" +#include "htn_adapt/hinic3_htn_cmdq.h" +#include "stn_adapt/hinic3_stn_cmdq.h" #include "hinic3_nic_io.h" #include "hinic3_tx.h" #include "hinic3_rx.h" @@ -2577,8 +2579,7 @@ hinic3_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) { PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err); return err; @@ -2626,8 +2627,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) return err; @@ -2648,8 +2648,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, } } - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl); if (err) PMD_DRV_LOG(ERR, "Set RSS reta table failed"); @@ -3387,6 +3386,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto get_cap_fail; } + if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) + nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + else + nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index db5802e4b7..c8e690981b 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -6,6 +6,7 @@ #define _HINIC3_NIC_IO_H_ #include "hinic3_ethdev.h" +#include "base/hinic3_cmdq.h" #define HINIC3_SQ_WQEBB_SHIFT 4 #define HINIC3_RQ_WQEBB_SHIFT 3 @@ -25,6 +26,13 @@ #define HINIC3_CI_PADDR(base_paddr, q_id) \ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE) +#define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) + +#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -37,12 +45,119 @@ enum hinic3_queue_type { HINIC3_MAX_QUEUE_TYPE, }; +enum hinic3_qp_ctxt_type { + HINIC3_QP_CTXT_TYPE_SQ, + HINIC3_QP_CTXT_TYPE_RQ, +}; + +/* Prepare cmd to clean tso/lro space */ +typedef uint8_t 
(*prepare_cmd_buf_clean_tso_lro_space_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type); +/* Prepare cmd to store RQ and TQ ctxt */ +typedef uint8_t (*prepare_cmd_buf_qp_context_multi_store_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, + uint16_t max_ctxts); +/* Prepare cmd to modify vlan tag */ +typedef uint8_t (*prepare_cmd_buf_modify_svlan_t)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, + uint8_t vlan_mode); +/* Prepare cmd to set RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_set_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf); +/* Prepare cmd to get RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_get_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf); +/* Configure RSS indir table */ +typedef void (*cmd_buf_to_rss_indir_table_t)(const struct hinic3_cmd_buf *cmd_buf, + uint32_t *indir_table); + +struct hinic3_nic_cmdq_ops { + prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space; + prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store; + prepare_cmd_buf_modify_svlan_t prepare_cmd_buf_modify_svlan; + prepare_cmd_buf_set_rss_indir_table_t prepare_cmd_buf_set_rss_indir_table; + prepare_cmd_buf_get_rss_indir_table_t prepare_cmd_buf_get_rss_indir_table; + cmd_buf_to_rss_indir_table_t cmd_buf_to_rss_indir_table; +}; + /* Doorbell info. */ struct hinic3_db { uint32_t db_info; uint32_t pi_hi; }; +struct hinic3_sq_ctxt { + uint32_t ci_pi; + uint32_t drop_mode_sp; + uint32_t wq_pfn_hi_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd0; + uint32_t pkt_drop_thd; + uint32_t global_sq_id; + uint32_t vlan_ceq_attr; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t rsvd8; + uint32_t rsvd9; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_ctxt { + uint32_t ci_pi; + uint32_t ceq_attr; + uint32_t wq_pfn_hi_type_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd[3]; + uint32_t cqe_sge_len; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t pi_paddr_hi; + uint32_t pi_paddr_lo; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_cqe_ctx { + struct mgmt_msg_head msg_head; + + uint8_t cqe_type; + uint8_t rq_id; + uint8_t threshold_cqe_num; + uint8_t rsvd1; + + uint16_t msix_entry_idx; + uint16_t rsvd2; + + uint32_t ci_addr_hi; + uint32_t ci_addr_lo; + + uint16_t timer_loop; + uint16_t rsvd3; +}; + +struct hinic3_rq_enable { + struct mgmt_msg_head msg_head; + + uint32_t rq_id; + uint8_t rq_enable; + uint8_t rsvd[3]; +}; + #define DB_INFO_QID_SHIFT 0 #define DB_INFO_NON_FILTER_SHIFT 22 #define DB_INFO_CFLAG_SHIFT 23 @@ -142,6 +257,21 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); + +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @retval Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); + /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e8e417b474..3d5f4e4524 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -407,8 +407,7 @@ hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) /* Build indir tbl according to the number of rss queue. */ hinic3_fill_indir_tbl(nic_dev, indir_tbl); - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c new file mode 100644 index 0000000000..d997647f48 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_htn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + ctxt_block->cmdq_hdr.dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id, uint16_t func_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->dest_func_id = func_id; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t func_id; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + func_id = hinic3_global_func_id(nic_dev->hwdev); + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid, func_id); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], + start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->dest_func_id = func_id; + vlan_ctx->start_qid = q_id; + vlan_ctx->vlan_tag = vlan_tag; + vlan_ctx->vlan_sel = 0; 
/* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_HTN_CMD_SVLAN_MODIFY; +} + +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf + sizeof(struct hinic3_rss_cmd_header); + cmd_buf->size = sizeof(struct hinic3_rss_cmd_header) + HINIC3_RSS_INDIR_SIZE; + memset(indir_tbl, 0, HINIC3_RSS_INDIR_SIZE); + + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl[i] = (uint8_t)(*(indir_table + i)); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(indir_tbl, HINIC3_RSS_INDIR_SIZE); + + return HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + memset(cmd_buf->buf, 0, cmd_buf->size); + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + return HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_be32_to_cpu(cmd_buf->buf, HINIC3_RSS_INDIR_SIZE); + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h new file mode 100644 index 0000000000..1245b9c8d8 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_HTN_CMDQ_H_ +#define _HINIC3_HTN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint32_t rsvd[2]; + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_rss_cmd_header { + uint32_t rsv[3]; + uint16_t rsv1; + uint16_t dest_func_id; +}; + +/* NIC HTN CMD */ +enum hinic3_htn_cmd { + 
HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST = 0x20, + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_LD, + HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN, + HINIC3_HTN_CMD_SVLAN_MODIFY, + HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE, + HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE +}; + +struct hinic3_vlan_ctx { + uint32_t rsv[2]; + uint16_t vlan_tag; + uint8_t vlan_sel; + uint8_t vlan_mode; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +#endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/htn_adapt/meson.build b/drivers/net/hinic3/htn_adapt/meson.build new file mode 100644 index 0000000000..17f7ad09e3 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_htn_cmdq.c', +) diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c new file mode 100644 index 0000000000..3d4becf07c --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_stn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->rsvd = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->func_id = func_id; + vlan_ctx->qid = q_id; + vlan_ctx->vlan_id = 
vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i, size; + uint32_t *temp = NULL; + struct nic_rss_indirect_tbl *indir_tbl = NULL; + + indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; + cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); + memset(indir_tbl, 0, sizeof(*indir_tbl)); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); + size = sizeof(indir_tbl->entry) / 4; + temp = (uint32_t *)indir_tbl->entry; + for (i = 0; i < size; i++) { + rte_atomic_thread_fence(rte_memory_order_seq_cst); + temp[i] = rte_cpu_to_be_32(temp[i]); + } + return HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + (void)nic_dev; + memset(cmd_buf->buf, 0, cmd_buf->size); + + return HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint16_t *indir_tbl = NULL; + + indir_tbl = (uint16_t *)cmd_buf->buf; + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h new file mode 100644 index 0000000000..f8d26e9397 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_STN_CMDQ_H_ +#define _HINIC3_STN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t rsvd; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; + uint32_t rsvd; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_vlan_ctx { + uint32_t func_id; + uint32_t qid; /* if qid = 0xFFFF, config for all queues */ + uint32_t vlan_id; + uint32_t vlan_mode; + uint32_t vlan_sel; +}; + +#endif /* _HINIC3_STN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/stn_adapt/meson.build b/drivers/net/hinic3/stn_adapt/meson.build new file mode 100644 index 0000000000..99f7f66ab4 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += 
files( + 'hinic3_stn_cmdq.c', +) -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
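The stn_adapt/htn_adapt split above rests on one idea: each silicon generation exposes a static table of prepare_cmd_buf_* callbacks, the caller picks a table once at init, and common code only needs the opcode the callback returns. The standalone sketch below shows that dispatch pattern in isolation, assuming nothing beyond standard C; every demo_* name is invented for illustration and is not part of the hinic3 sources.

/*
 * Minimal sketch of the per-generation ops-table pattern: one static
 * table per NIC generation, selected once from a feature bit, then the
 * fast path never branches on hardware type again. All names here are
 * made up for this example.
 */
#include <stdint.h>
#include <stdio.h>

#define DEMO_F_HTN_CMDQ (1u << 21)	/* stand-in for a feature-cap bit */

struct demo_cmd_buf {
	uint16_t size;
	uint8_t buf[64];
};

struct demo_cmdq_ops {
	/* Fill the command buffer and return the opcode to send. */
	uint8_t (*prepare_clean_ctxt)(struct demo_cmd_buf *cmd_buf);
};

static uint8_t stn_prepare_clean_ctxt(struct demo_cmd_buf *cmd_buf)
{
	cmd_buf->size = 8;	/* legacy context header */
	return 0x11;		/* legacy opcode */
}

static uint8_t htn_prepare_clean_ctxt(struct demo_cmd_buf *cmd_buf)
{
	cmd_buf->size = 12;	/* enhanced header carries a func id */
	return 0x22;		/* HTN opcode */
}

static const struct demo_cmdq_ops stn_ops = { .prepare_clean_ctxt = stn_prepare_clean_ctxt };
static const struct demo_cmdq_ops htn_ops = { .prepare_clean_ctxt = htn_prepare_clean_ctxt };

int main(void)
{
	uint32_t feature_cap = DEMO_F_HTN_CMDQ;	/* pretend probe reported HTN */
	const struct demo_cmdq_ops *ops =
		(feature_cap & DEMO_F_HTN_CMDQ) ? &htn_ops : &stn_ops;
	struct demo_cmd_buf cmd_buf = { 0 };

	/* Common code no longer cares which silicon it runs on. */
	uint8_t cmd = ops->prepare_clean_ctxt(&cmd_buf);

	printf("opcode 0x%x, payload %u bytes\n", (unsigned)cmd, (unsigned)cmd_buf.size);
	return 0;
}

Because the tables are static and selected by a single pointer assignment, adding a third generation later only means adding another table, not touching the callers.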
* [V4 4/7] net/hinic3: add fun init ops to support Compact CQE 2026-03-18 6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (2 preceding siblings ...) 2026-03-18 6:20 ` [V4 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang @ 2026-03-18 6:20 ` Feifei Wang 2026-03-18 6:20 ` [V4 5/7] net/hinic3: add rx " Feifei Wang ` (2 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 6:20 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, use compact CQE to achieve better performance. In this mode, CQE is uploaded together with packet. When doing fun init, replace CQE's dma memory mapping with CI index, hinic3 driver will loop CI to check if packet arrive. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.c | 212 +++++-- drivers/net/hinic3/hinic3_ethdev.h | 117 ++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++++---------- drivers/net/hinic3/hinic3_nic_io.h | 61 +- drivers/net/hinic3/hinic3_rx.h | 18 + drivers/net/hinic3/hinic3_tx.h | 8 + .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 24 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 12 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 24 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 12 +- 10 files changed, 577 insertions(+), 436 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 780b17414a..1010773ac1 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -32,7 +32,7 @@ #define HINIC3_DEFAULT_RX_FREE_THRESH 32u #define HINIC3_DEFAULT_TX_FREE_THRESH 32u -#define HINIC3_RX_WAIT_CYCLE_THRESH 500 +#define HINIC3_RX_WAIT_CYCLE_THRESH 150 /** * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID. @@ -431,8 +431,10 @@ hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) { + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint8_t default_cos = 0; uint8_t valid_cos_bitmap; + uint8_t cos_num_max; uint8_t i; valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap; @@ -441,7 +443,10 @@ hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EFAULT; } - for (i = 0; i < HINIC3_COS_NUM_MAX; i++) { + cos_num_max = nic_dev->feature_cap & NIC_F_HTN_CMDQ ? + HINIC3_COS_NUM_MAX_HTN : HINIC3_COS_NUM_MAX; + + for (i = 0; i < cos_num_max; i++) { if (valid_cos_bitmap & RTE_BIT32(i)) /* Find max cos id as default cos. */ default_cos = i; @@ -644,6 +649,15 @@ hinic3_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +hinic3_dev_tnl_tso_support(struct rte_eth_dev_info *info, struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + if (HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO; +} + /** * Get information about the device. 
* @@ -684,6 +698,8 @@ hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) + hinic3_dev_tnl_tso_support(info, nic_dev); info->hash_key_size = HINIC3_RSS_KEY_SIZE; info->reta_size = HINIC3_RSS_INDIR_SIZE; @@ -926,16 +942,25 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, struct hinic3_rxq *rxq = NULL; const struct rte_memzone *rq_mz = NULL; const struct rte_memzone *cqe_mz = NULL; + const struct rte_memzone *ci_mz = NULL; const struct rte_memzone *pi_mz = NULL; uint16_t rq_depth, rx_free_thresh; uint32_t queue_buf_size; void *db_addr = NULL; int wqe_count; uint32_t buf_size; + uint32_t rx_buf_size; int err; nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + /* Queue depth must be equal to queue 0 */ + if (qid != 0 && nb_desc != nic_dev->rxqs[0]->q_depth) { + PMD_DRV_LOG(WARNING, "rxq%u depth:%u is not equal to queue0 depth:%u.", + qid, nb_desc, nic_dev->rxqs[0]->q_depth); + nb_desc = nic_dev->rxqs[0]->q_depth; + } + /* Queue depth must be power of 2, otherwise will be aligned up. */ rq_depth = (nb_desc & (nb_desc - 1)) ? ((uint16_t)(1U << (rte_log2_u32(nb_desc) + 1))) : nb_desc; @@ -997,8 +1022,7 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH; rxq->rx_deferred_start = rx_conf->rx_deferred_start; /* If buf_len used for function table, need to translated. */ - uint16_t rx_buf_size = - rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rx_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size); if (err) { PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s", @@ -1006,11 +1030,16 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto adjust_bufsize_fail; } - if (buf_size >= HINIC3_RX_BUF_SIZE_4K && - buf_size < HINIC3_RX_BUF_SIZE_16K) - rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; - else - rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + /* If NIC support compact CQE, use compact wqe as default. 
*/ + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + rxq->wqe_type = HINIC3_COMPACT_RQ_WQE; + } else { + if (buf_size >= HINIC3_RX_BUF_SIZE_4K && + buf_size < HINIC3_RX_BUF_SIZE_16K) + rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; + else + rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + } rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type; rxq->wqebb_size = (uint16_t)RTE_BIT32(rxq->wqebb_shift); @@ -1062,36 +1091,52 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_rx_info_fail; } - cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, - rq_depth * sizeof(*rxq->rx_cqe), - RTE_CACHE_LINE_SIZE, socket_id); - if (!cqe_mz) { - PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", - dev->data->name); - err = -ENOMEM; - goto alloc_cqe_mz_fail; - } - memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); - rxq->cqe_mz = cqe_mz; - rxq->cqe_start_paddr = cqe_mz->iova; - rxq->cqe_start_vaddr = cqe_mz->addr; - rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; - - wqe_count = hinic3_rx_fill_wqe(rxq); - if (wqe_count != rq_depth) { - PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", - wqe_count, dev->data->name); - err = -ENOMEM; - goto fill_rx_wqe_fail; + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_ci_mz", qid, + sizeof(*rxq->rq_ci), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (!ci_mz) { + PMD_DRV_LOG(ERR, "Allocate ci mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + + memset(ci_mz->addr, 0, sizeof(*rxq->rq_ci)); + rxq->ci_mz = ci_mz; + rxq->rq_ci = (struct hinic3_rq_ci_wb *)ci_mz->addr; + rxq->rq_ci_paddr = ci_mz->iova; + } else { + cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, + rq_depth * sizeof(*rxq->rx_cqe), + RTE_CACHE_LINE_SIZE, socket_id); + if (!cqe_mz) { + PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); + rxq->cqe_mz = cqe_mz; + rxq->cqe_start_paddr = cqe_mz->iova; + rxq->cqe_start_vaddr = cqe_mz->addr; + rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; + + wqe_count = hinic3_rx_fill_wqe(rxq); + if (wqe_count != rq_depth) { + PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", + wqe_count, dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(cqe_mz); + goto alloc_cqe_ci_mz_fail; + } } - /* Record rxq pointer in rte_eth rx_queues. */ dev->data->rx_queues[qid] = rxq; return 0; -fill_rx_wqe_fail: - hinic3_memzone_free(rxq->cqe_mz); -alloc_cqe_mz_fail: +alloc_cqe_ci_mz_fail: rte_free(rxq->rx_info); alloc_rx_info_fail: @@ -1199,6 +1244,7 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, txq->owner = 1; txq->cos = nic_dev->default_cos; txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->tx_wqe_compact_task = HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev); ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid, HINIC3_CI_Q_ADDR_SIZE, @@ -1246,7 +1292,6 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_tx_info_fail; } - /* Record txq pointer in rte_eth tx_queues. 
*/ dev->data->tx_queues[qid] = txq; return 0; @@ -1274,7 +1319,10 @@ hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) hinic3_free_rxq_mbufs(rxq); - hinic3_memzone_free(rxq->cqe_mz); + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) + hinic3_memzone_free(rxq->ci_mz); + else + hinic3_memzone_free(rxq->cqe_mz); rte_free(rxq->rx_info); rxq->rx_info = NULL; @@ -1323,9 +1371,12 @@ hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) static int hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; + rxq = dev->data->rx_queues[rq_id]; + rc = hinic3_start_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1333,14 +1384,17 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } - dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", - rq_id); - return rc; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", rq_id); + return rc; + } } + + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1358,14 +1412,10 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) static int hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); - return rc; - } rc = hinic3_stop_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1373,6 +1423,15 @@ hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); + return rc; + } + } + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; @@ -1388,6 +1447,7 @@ hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id) HINIC3_SET_TXQ_STARTED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1404,6 +1464,7 @@ hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id) dev->data->name, sq_id); return rc; } + HINIC3_SET_TXQ_STOPPED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -3286,6 +3347,24 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { .flow_ops_get = hinic3_dev_filter_ctrl, }; +static void hinic3_nic_tx_rx_ops_init(struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev)) + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_compact_task_offload; + else + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_normal_task_offload; + + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + nic_dev->rx_ops->nic_rx_get_cqe_info = hinic3_rx_get_compact_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_integrated_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty; + } else { + nic_dev->rx_ops->nic_rx_get_cqe_info = 
hinic3_rx_get_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_separate_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_rq_empty; + } +} + /** * Initialize the network function, including hardware configuration, memory * allocation for data structures, MAC address setup, and interrupt enabling. @@ -3333,6 +3412,27 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto alloc_eth_addr_fail; } + nic_dev->cmdq_ops = rte_zmalloc("cmdq_ops", sizeof(struct hinic3_nic_cmdq_ops), 0); + if (!nic_dev->cmdq_ops) { + PMD_DRV_LOG(ERR, "Allocate cmdq_ops memory failed"); + err = -ENOMEM; + goto alloc_cmdq_ops_fail; + } + + nic_dev->rx_ops = rte_zmalloc("rx_ops", sizeof(struct hinic3_nic_rx_ops), 0); + if (!nic_dev->rx_ops) { + PMD_DRV_LOG(ERR, "Allocate rx_ops memory failed"); + err = -ENOMEM; + goto alloc_rx_ops_fail; + } + + nic_dev->tx_ops = rte_zmalloc("tx_ops", sizeof(struct hinic3_nic_tx_ops), 0); + if (!nic_dev->tx_ops) { + PMD_DRV_LOG(ERR, "Allocate tx_ops memory failed"); + err = -ENOMEM; + goto alloc_tx_ops_fail; + } + nic_dev->mc_list = rte_zmalloc("hinic3_mc", HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0); if (!nic_dev->mc_list) { @@ -3387,9 +3487,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) } if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) - nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); else - nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + + hinic3_nic_tx_rx_ops_init(nic_dev); err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { @@ -3479,6 +3581,18 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) nic_dev->mc_list = NULL; alloc_mc_list_fail: + rte_free(nic_dev->tx_ops); + nic_dev->tx_ops = NULL; + +alloc_tx_ops_fail: + rte_free(nic_dev->rx_ops); + nic_dev->rx_ops = NULL; + +alloc_rx_ops_fail: + rte_free(nic_dev->cmdq_ops); + nic_dev->cmdq_ops = NULL; + +alloc_cmdq_ops_fail: rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 4a5dbb0844..3898edd076 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -14,44 +14,50 @@ #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle) -#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN -#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD -#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD -#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG -#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM -#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM -#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM -#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN -#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK -#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM -#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 -#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 -#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN -#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED -#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH -#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK -#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN -#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM -#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 -#define 
HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO -#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM +#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN +#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD +#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD +#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG +#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM +#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM +#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM +#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN +#define HINIC3_PKT_TX_QINQ_PKT RTE_MBUF_F_TX_QINQ +#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK +#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM +#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 +#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 +#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN +#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED +#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH +#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK +#define HINIC3_PKT_TX_TUNNEL_GRE RTE_MBUF_F_TX_TUNNEL_GRE +#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN +#define HINIC3_PKT_TX_TUNNEL_VXLAN_GPE RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE +#define HINIC3_PKT_TX_TUNNEL_GENEVE RTE_MBUF_F_TX_TUNNEL_GENEVE +#define HINIC3_PKT_TX_TUNNEL_IPIP RTE_MBUF_F_TX_TUNNEL_IPIP +#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM +#define HINIC3_PKT_TX_OUTER_UDP_CKSUM RTE_MBUF_F_TX_OUTER_UDP_CKSUM +#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 +#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO +#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM #define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool" /* Mbuf pool for copy invalid mbuf segs. 
*/ -#define HINIC3_COPY_MEMPOOL_DEPTH 1024 -#define HINIC3_COPY_MEMPOOL_CACHE 128 -#define HINIC3_COPY_MBUF_SIZE 4096 +#define HINIC3_COPY_MEMPOOL_DEPTH 1024 +#define HINIC3_COPY_MEMPOOL_CACHE 128 +#define HINIC3_COPY_MBUF_SIZE 4096 -#define HINIC3_DEV_NAME_LEN 32 -#define DEV_STOP_DELAY_MS 100 -#define DEV_START_DELAY_MS 100 -#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 +#define HINIC3_DEV_NAME_LEN 32 +#define DEV_STOP_DELAY_MS 100 +#define DEV_START_DELAY_MS 100 +#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 -#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) -#define HINIC3_MAX_QUEUE_NUM 64 +#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) +#define HINIC3_MAX_QUEUE_NUM 256 #define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \ ((struct hinic3_nic_dev *)(dev)->data->dev_private) @@ -68,23 +74,34 @@ enum hinic3_tx_cvlan_type { }; enum nic_feature_cap { - NIC_F_CSUM = RTE_BIT32(0), - NIC_F_SCTP_CRC = RTE_BIT32(1), - NIC_F_TSO = RTE_BIT32(2), - NIC_F_LRO = RTE_BIT32(3), - NIC_F_UFO = RTE_BIT32(4), - NIC_F_RSS = RTE_BIT32(5), - NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), - NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), - NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), - NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), - NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), - NIC_F_FDIR = RTE_BIT32(11), - NIC_F_PROMISC = RTE_BIT32(12), - NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_CSUM = RTE_BIT32(0), + NIC_F_SCTP_CRC = RTE_BIT32(1), + NIC_F_TSO = RTE_BIT32(2), + NIC_F_LRO = RTE_BIT32(3), + NIC_F_UFO = RTE_BIT32(4), + NIC_F_RSS = RTE_BIT32(5), + NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), + NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), + NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), + NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), + NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), + NIC_F_FDIR = RTE_BIT32(11), + NIC_F_PROMISC = RTE_BIT32(12), + NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_PTP_1588_V2 = RTE_BIT32(18), + NIC_F_TX_WQE_COMPACT_TASK = RTE_BIT32(19), + NIC_F_RX_HW_COMPACT_CQE = RTE_BIT32(20), + NIC_F_HTN_CMDQ = RTE_BIT32(21), + NIC_F_GENEVE_OFFLOAD = RTE_BIT32(22), + NIC_F_IPXIP_OFFLOAD = RTE_BIT32(23), + NIC_F_TC_FLOWER_OFFLOAD = RTE_BIT32(24), + NIC_F_HTN_FDIR = RTE_BIT32(25), + NIC_F_SQ_RQ_CI_COALESCE = RTE_BIT32(26), + NIC_F_RX_SW_COMPACT_CQE = RTE_BIT32(27), + }; -#define DEFAULT_DRV_FEATURE 0x3FFF +#define DEFAULT_DRV_FEATURE 0x3FC3FFF TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow); TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow); @@ -133,6 +150,10 @@ struct hinic3_nic_dev { struct hinic3_tcam_info tcam; struct hinic3_ethertype_filter_list filter_ethertype_list; struct hinic3_fdir_rule_filter_list filter_fdir_rule_list; + struct hinic3_nic_cmdq_ops *cmdq_ops; + struct hinic3_nic_rx_ops *rx_ops; + struct hinic3_nic_tx_ops *tx_ops; + }; extern const struct rte_flow_ops hinic3_flow_ops; diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c index 7f2972f1d1..9203dcce40 100644 --- a/drivers/net/hinic3/hinic3_nic_io.c +++ b/drivers/net/hinic3/hinic3_nic_io.c @@ -11,310 +11,194 @@ #include "hinic3_rx.h" #include "hinic3_tx.h" -#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 -#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 -#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF -#define HINIC3_DEAULT_DROP_THD_OFF 0 - -#define WQ_PREFETCH_MAX 6 -#define WQ_PREFETCH_MIN 1 -#define WQ_PREFETCH_THRESHOLD 256 - -#define HINIC3_Q_CTXT_MAX \ - ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) - -enum hinic3_qp_ctxt_type { - HINIC3_QP_CTXT_TYPE_SQ, 
- HINIC3_QP_CTXT_TYPE_RQ, -}; - -struct hinic3_qp_ctxt_header { - uint16_t num_queues; - uint16_t queue_type; - uint16_t start_qid; - uint16_t rsvd; -}; - -struct hinic3_sq_ctxt { - uint32_t ci_pi; - uint32_t drop_mode_sp; /**< Packet drop mode and special flags. */ - uint32_t wq_pfn_hi_owner; /**< High PFN and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd0; /**< Reserved field 0. */ - uint32_t pkt_drop_thd; /**< Packet drop threshold. */ - uint32_t global_sq_id; - uint32_t vlan_ceq_attr; /**< VLAN and CEQ attributes. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t rsvd8; /**< Reserved field 8. */ - uint32_t rsvd9; /**< Reserved field 9. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_rq_ctxt { - uint32_t ci_pi; - uint32_t ceq_attr; /**< Completion event queue attributes. */ - uint32_t wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd[3]; /**< Reserved field. */ - uint32_t cqe_sge_len; /**< CQE scatter/gather element length. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t pi_paddr_hi; /**< High 32-bits of PI DMA address. */ - uint32_t pi_paddr_lo; /**< Low 32-bits of PI DMA address. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. 
*/ -}; - -struct hinic3_sq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_rq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; - uint32_t rsvd; -}; - -#define SQ_CTXT_SIZE(num_sqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_sqs) * sizeof(struct hinic3_sq_ctxt))) - -#define RQ_CTXT_SIZE(num_rqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - -#define CI_IDX_HIGH_SHIFH 12 +#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 +#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 +#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF +#define HINIC3_DEAULT_DROP_THD_OFF 0 + +#define WQ_PREFETCH_MAX 6 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +#define CI_IDX_HIGH_SHIFH 12 #define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) -#define SQ_CTXT_PI_IDX_SHIFT 0 -#define SQ_CTXT_CI_IDX_SHIFT 16 +#define SQ_CTXT_PI_IDX_SHIFT 0 +#define SQ_CTXT_CI_IDX_SHIFT 16 -#define SQ_CTXT_PI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_IDX_MASK 0xFFFFU +#define SQ_CTXT_PI_IDX_MASK 0xFFFFU +#define SQ_CTXT_CI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_PI_SET(val, member) \ +#define SQ_CTXT_CI_PI_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 -#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 +#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 +#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 -#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U -#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U +#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U +#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U -#define SQ_CTXT_MODE_SET(val, member) \ - (((val) & SQ_CTXT_MODE_##member##_MASK) \ +#define SQ_CTXT_MODE_SET(val, member) \ + (((val) & SQ_CTXT_MODE_##member##_MASK) \ << SQ_CTXT_MODE_##member##_SHIFT) -#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 +#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 -#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define SQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define SQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ << SQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 -#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 +#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 +#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 -#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ - (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ +#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ + (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ << SQ_CTXT_PKT_DROP_##member##_SHIFT) -#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 +#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 -#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU +#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU #define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_VLAN_TAG_SHIFT 0 -#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 -#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 -#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 +#define 
SQ_CTXT_VLAN_TAG_SHIFT 0 +#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 +#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 +#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 -#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU -#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U -#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U -#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U +#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU +#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U +#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U +#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U -#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ - (((val) & SQ_CTXT_VLAN_##member##_MASK) \ +#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ + (((val) & SQ_CTXT_VLAN_##member##_MASK) \ << SQ_CTXT_VLAN_##member##_SHIFT) -#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define SQ_CTXT_PREF_CI_HI_SHIFT 0 -#define SQ_CTXT_PREF_OWNER_SHIFT 4 +#define SQ_CTXT_PREF_CI_HI_SHIFT 0 +#define SQ_CTXT_PREF_OWNER_SHIFT 4 -#define SQ_CTXT_PREF_CI_HI_MASK 0xFU -#define SQ_CTXT_PREF_OWNER_MASK 0x1U +#define SQ_CTXT_PREF_CI_HI_MASK 0xFU +#define SQ_CTXT_PREF_OWNER_MASK 0x1U -#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define SQ_CTXT_PREF_SET(val, member) \ - (((val) & SQ_CTXT_PREF_##member##_MASK) \ +#define SQ_CTXT_PREF_SET(val, member) \ + (((val) & SQ_CTXT_PREF_##member##_MASK) \ << SQ_CTXT_PREF_##member##_SHIFT) -#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT) -#define RQ_CTXT_PI_IDX_SHIFT 0 -#define RQ_CTXT_CI_IDX_SHIFT 16 +#define RQ_CTXT_PI_IDX_SHIFT 0 +#define RQ_CTXT_CI_IDX_SHIFT 16 -#define RQ_CTXT_PI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_IDX_MASK 0xFFFFU +#define RQ_CTXT_PI_IDX_MASK 0xFFFFU +#define RQ_CTXT_CI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_PI_SET(val, member) \ +#define RQ_CTXT_CI_PI_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 -#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 +#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 +#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 -#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ - (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ +#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ + (((val) & 
RQ_CTXT_CEQ_ATTR_##member##_MASK) \ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT) -#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 -#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 +#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 +#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 -#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U -#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U +#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define RQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define RQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ << RQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define RQ_CTXT_CQE_LEN_SHIFT 28 +#define RQ_CTXT_CQE_LEN_SHIFT 28 -#define RQ_CTXT_CQE_LEN_MASK 0x3U +#define RQ_CTXT_CQE_LEN_MASK 0x3U -#define RQ_CTXT_CQE_LEN_SET(val, member) \ +#define RQ_CTXT_CQE_LEN_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define RQ_CTXT_PREF_CI_HI_SHIFT 0 -#define RQ_CTXT_PREF_OWNER_SHIFT 4 +#define RQ_CTXT_PREF_CI_HI_SHIFT 0 +#define RQ_CTXT_PREF_OWNER_SHIFT 4 -#define RQ_CTXT_PREF_CI_HI_MASK 0xFU -#define RQ_CTXT_PREF_OWNER_MASK 0x1U +#define RQ_CTXT_PREF_CI_HI_MASK 0xFU +#define RQ_CTXT_PREF_OWNER_MASK 0x1U -#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define RQ_CTXT_PREF_SET(val, member) \ - (((val) & RQ_CTXT_PREF_##member##_MASK) \ +#define RQ_CTXT_PREF_SET(val, member) \ + (((val) & RQ_CTXT_PREF_##member##_MASK) \ << RQ_CTXT_PREF_##member##_SHIFT) -#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ << RQ_CTXT_WQ_BLOCK_##member##_SHIFT) #define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4) -#define WQ_PAGE_PFN_SHIFT 12 -#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_PAGE_PFN_SHIFT 12 +#define WQ_BLOCK_PFN_SHIFT 9 #define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT) #define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -/** - * Prepare the command queue header and converted it to big-endian format. - * - * @param[out] qp_ctxt_hdr - * Pointer to command queue context header structure to be initialized. - * @param[in] ctxt_type - * Type of context (SQ/RQ) to be set in header. - * @param[in] num_queues - * Number of queues. - * @param[in] q_id - * Starting queue ID for this context. 
- */ -static void -hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, - enum hinic3_qp_ctxt_type ctxt_type, - uint16_t num_queues, uint16_t q_id) -{ - qp_ctxt_hdr->queue_type = ctxt_type; - qp_ctxt_hdr->num_queues = num_queues; - qp_ctxt_hdr->start_qid = q_id; - qp_ctxt_hdr->rsvd = 0; - - rte_atomic_thread_fence(rte_memory_order_seq_cst); +#define CQE_CTX_CI_ADDR_SHIFT 4 - hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); -} - -/** - * Initialize context structure for specified TXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] sq - * Pointer to TXQ structure. - * @param[in] sq_id - * ID of TXQ being configured. - * @param[out] sq_ctxt - * Pointer to structure that will hold TXQ context. - */ -static void +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, struct hinic3_sq_ctxt *sq_ctxt) { @@ -386,22 +270,13 @@ hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt)); } -/** - * Initialize context structure for specified RXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] rq - * Pointer to RXQ structure. - * @param[out] rq_ctxt - * Pointer to structure that will hold RXQ context. - */ -static void +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) { uint64_t wq_page_addr, wq_page_pfn, wq_block_pfn; uint32_t wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo; uint16_t pi_start, ci_start; - uint16_t wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT; + uint16_t wqe_type = rq->wqe_type; uint8_t intr_disable; /* RQ depth is in unit of 8 Bytes. */ @@ -446,6 +321,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE); rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN); break; + case HINIC3_COMPACT_RQ_WQE: + /* Use 8Byte WQE without SGE for CQE. */ + rq_ctxt->wq_pfn_hi_type_owner |= RQ_CTXT_WQ_PAGE_SET(3, WQE_TYPE); + break; default: PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type); } @@ -495,12 +374,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) static int init_sq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL; - struct hinic3_sq_ctxt *sq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_txq *sq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -511,28 +388,14 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_sqs) { - sq_ctxt_block = cmd_buf->buf; - sq_ctxt = sq_ctxt_block->sq_ctxt; - max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX ? 
HINIC3_Q_CTXT_MAX : (nic_dev->num_sqs - q_id); - - hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_SQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - sq = nic_dev->txqs[curr_id]; - hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]); - } - - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set SQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -563,12 +426,10 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) static int init_rq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL; - struct hinic3_rq_ctxt *rq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_rxq *rq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -579,28 +440,14 @@ init_rq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_rqs) { - rq_ctxt_block = cmd_buf->buf; - rq_ctxt = rq_ctxt_block->rq_ctxt; - max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX ? HINIC3_Q_CTXT_MAX : (nic_dev->num_rqs - q_id); - - hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_RQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - rq = nic_dev->rxqs[curr_id]; - hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]); - } - - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_RQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set RQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -633,9 +480,9 @@ static int clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; struct hinic3_cmd_buf *cmd_buf; uint64_t out_param = 0; + uint8_t cmd; int err; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -644,26 +491,11 @@ clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, return -ENOMEM; } - /* Construct related command request. */ - ctxt_block = cmd_buf->buf; - /* Assumed max_rqs must be equal to max_sqs. */ - ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; - ctxt_block->cmdq_hdr.queue_type = ctxt_type; - ctxt_block->cmdq_hdr.start_qid = 0; - /* - * Add a memory barrier to ensure that instructions are not out of order - * due to compilation optimization. - */ - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); - - cmd_buf->size = sizeof(*ctxt_block); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_clean_tso_lro_space(nic_dev, cmd_buf, ctxt_type); /* Send a command to hardware to clean up queue offload context. 
*/ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if ((err) || (out_param)) { PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, err: %d, out_param: %" PRIu64, @@ -705,6 +537,62 @@ hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev) nic_dev->rx_buff_len = buf_size; } +#define HINIC3_RX_CQE_TIMER_LOOP 15 +#define HINIC3_RX_CQE_COALESCE_NUM 63 + +int +hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rxq *rxq = NULL; + struct hinic3_rq_cqe_ctx cqe_ctx = { 0 }; + rte_iova_t rq_ci_paddr; + uint16_t out_size = sizeof(cqe_ctx); + uint16_t q_id = 0; + uint16_t cmd; + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ) + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN; + else + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX; + + while (q_id < nic_dev->num_rqs) { + rxq = nic_dev->rxqs[q_id]; + if (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE) { + rq_ci_paddr = rxq->rq_ci_paddr >> CQE_CTX_CI_ADDR_SHIFT; + cqe_ctx.ci_addr_hi = upper_32_bits(rq_ci_paddr); + cqe_ctx.ci_addr_lo = lower_32_bits(rq_ci_paddr); + cqe_ctx.threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM; + cqe_ctx.timer_loop = HINIC3_RX_CQE_TIMER_LOOP; + } else { + cqe_ctx.threshold_cqe_num = 0; + cqe_ctx.timer_loop = 0; + } + + cqe_ctx.cqe_type = (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE); + cqe_ctx.msix_entry_idx = rxq->msix_entry_idx; + cqe_ctx.rq_id = q_id; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, + &cqe_ctx, sizeof(cqe_ctx), + &cqe_ctx, &out_size); + if (err || !out_size || cqe_ctx.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed, qid: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, err, cqe_ctx.msg_head.status, out_size); + return -EFAULT; + } + q_id++; + } + + return 0; +} + int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) { @@ -768,13 +656,50 @@ hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) } } + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + /* Init Rxq CQE context. 
*/ + err = hinic3_init_rq_cqe_ctxts(nic_dev); + if (err) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed"); + goto set_cqe_ctx_fail; + } + } + return 0; +set_cqe_ctx_fail: set_cons_idx_table_err: hinic3_clean_root_ctxt(hwdev); return err; } +int +hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rq_enable msg; + uint16_t out_size = sizeof(msg); + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + memset(&msg, 0, sizeof(msg)); + msg.rq_enable = enable; + msg.rq_id = q_id; + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN, + &msg, sizeof(msg), &msg, &out_size); + if (err || !out_size || msg.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq enable failed, qid: %u, enable: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, enable, err, msg.msg_head.status, out_size); + return -EFAULT; + } + + return 0; +} + void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev) { diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index c8e690981b..d0acba4cf4 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -28,11 +28,6 @@ #define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) -#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) -#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -231,6 +226,31 @@ hinic3_write_db(void *db_addr, uint16_t q_id, int cos, uint8_t cflag, uint16_t p */ void hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev); +/** + * Initialize RQ integrated CQE context + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev); + +/** + * Set RQ disable or enable + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * @param[in] q_id + * Receive queue id. + * @param[in] enable + * 1: enable 0: disable + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable); + /** * Initialize qps contexts, set SQ ci attributes, arm all SQ. * @@ -268,7 +288,8 @@ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); /** * Get cmdq ops hardware tile NIC(htn) supported. * - * @retval Pointer to ops. + * @return + * Pointer to ops. */ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); @@ -279,9 +300,6 @@ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); * Pointer to ethernet device structure. * @param[out] s_feature * s_feature driver supported. - * - * @return - * 0 on success, non-zero on failure. */ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_feature); @@ -296,4 +314,29 @@ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_fea */ uint64_t hinic3_get_driver_feature(struct hinic3_nic_dev *nic_dev); +/** + * Initialize context structure for specified TXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] sq + * Pointer to TXQ structure. + * @param[in] sq_id + * ID of TXQ being configured. 
+ * @param[out] sq_ctxt + * Pointer to structure that will hold TXQ context. + */ +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, + struct hinic3_sq_ctxt *sq_ctxt); + +/** + * Initialize context structure for specified RXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] rq + * Pointer to RXQ structure. + * @param[out] rq_ctxt + * Pointer to structure that will hold RXQ context. + */ +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt); + #endif /* _HINIC3_NIC_IO_H_ */ diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 1a92df59b7..7ae39e3e91 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -279,6 +279,24 @@ struct __rte_cache_aligned hinic3_rxq { #endif }; +/* Rx CQE info get callback function */ +typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/* Rx CQE check status callback function */ +typedef bool (*nic_rx_cqe_done_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe **rx_cqe); + +/* Rx CQE empty poll callback function */ +typedef int (*nic_rx_poll_rq_empty_t)(struct hinic3_rxq *rxq); + +struct hinic3_nic_rx_ops { + nic_rx_get_cqe_info_t nic_rx_get_cqe_info; + nic_rx_cqe_done_t nic_rx_cqe_done; + nic_rx_poll_rq_empty_t nic_rx_poll_rq_empty; +}; + uint16_t hinic3_rx_fill_wqe(struct hinic3_rxq *rxq); uint16_t hinic3_rx_fill_buffers(struct hinic3_rxq *rxq); diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index d150f7c6a4..21958a00cc 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -304,6 +304,14 @@ struct __rte_cache_aligned hinic3_txq { #endif }; +/* Tx WQE offload set callback function */ +typedef void (*nic_tx_set_wqe_offload_t)(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +struct hinic3_nic_tx_ops { + nic_tx_set_wqe_offload_t nic_tx_set_wqe_offload; +}; + void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev); void hinic3_free_txq_mbufs(struct hinic3_txq *txq); void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c index d997647f48..634dfe7239 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_htn_cmdq.h" +#define HTN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define HTN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_htn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -27,7 +32,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_htn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id, 
uint16_t func_id) { @@ -45,7 +50,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_htn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t func_id; uint16_t i; @@ -65,9 +70,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; } @@ -75,10 +80,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_htn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_htn_vlan_ctx); + vlan_ctx = (struct hinic3_htn_vlan_ctx *)cmd_buf->buf; vlan_ctx->dest_func_id = func_id; vlan_ctx->start_qid = q_id; @@ -87,7 +92,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_htn_vlan_ctx)); return HINIC3_HTN_CMD_SVLAN_MODIFY; } diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 1245b9c8d8..ffafe39fb5 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -7,7 +7,7 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_htn_qp_ctxt_header { uint32_t rsvd[2]; uint16_t num_queues; uint16_t queue_type; @@ -15,12 +15,12 @@ struct hinic3_qp_ctxt_header { uint16_t dest_func_id; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_clean_queue_ctxt { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_qp_ctxt_block { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; @@ -43,7 +43,7 @@ enum hinic3_htn_cmd { HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE }; -struct hinic3_vlan_ctx { +struct hinic3_htn_vlan_ctx { uint32_t rsv[2]; uint16_t vlan_tag; uint8_t vlan_sel; diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index 3d4becf07c..dfe8598f78 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_stn_cmdq.h" +#define STN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define STN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct 
hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_stn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -26,7 +31,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_stn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id) { @@ -44,7 +49,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_stn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t i; qp_ctxt_block = cmd_buf->buf; @@ -62,9 +67,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; } @@ -72,10 +77,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_stn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_stn_vlan_ctx); + vlan_ctx = (struct hinic3_stn_vlan_ctx *)cmd_buf->buf; vlan_ctx->func_id = func_id; vlan_ctx->qid = q_id; @@ -84,7 +89,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_stn_vlan_ctx)); return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index f8d26e9397..a40c4faa89 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -7,27 +7,27 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_stn_qp_ctxt_header { uint16_t num_queues; uint16_t queue_type; uint16_t start_qid; uint16_t rsvd; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_clean_queue_ctxt { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; uint32_t rsvd; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_qp_ctxt_block { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; }; }; -struct hinic3_vlan_ctx { +struct hinic3_stn_vlan_ctx { uint32_t func_id; uint32_t qid; /* if qid = 0xFFFF, config for all queues */ uint32_t vlan_id; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
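The compact CQE mode introduced above drops the per-queue CQE DMA ring and instead has the NIC write a single consumer index (struct hinic3_rq_ci_wb) back to host memory, so the driver detects completions by comparing its software CI against that write-back value rather than checking a done bit in every CQE. The sketch below is a minimal, self-contained model of that polling scheme only: the demo_* names are invented, the ring arithmetic assumes a power-of-two depth with fewer than one ring of completions outstanding, and it is not the driver's actual receive path.

/*
 * Sketch of CI write-back polling: the device DMAs its current CI into
 * one shared location (the driver reserves a memzone for this), and the
 * poll routine computes how many new completions exist from the gap
 * between hardware CI and software CI.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_Q_DEPTH 256u
#define DEMO_Q_MASK  (DEMO_Q_DEPTH - 1)

/* Location the hardware would DMA its current CI into. */
static _Atomic uint16_t demo_hw_ci_wb;

/* Pretend the NIC delivered 'n' packets and advanced its CI. */
static void demo_hw_deliver(uint16_t n)
{
	uint16_t ci = atomic_load_explicit(&demo_hw_ci_wb, memory_order_relaxed);

	atomic_store_explicit(&demo_hw_ci_wb, (uint16_t)(ci + n), memory_order_release);
}

/* Driver-side poll: how many new completions since the last call? */
static uint16_t demo_poll(uint16_t *sw_ci)
{
	uint16_t hw_ci = atomic_load_explicit(&demo_hw_ci_wb, memory_order_acquire);
	uint16_t done = (uint16_t)((hw_ci - *sw_ci) & DEMO_Q_MASK);

	*sw_ci = hw_ci;
	return done;
}

int main(void)
{
	uint16_t sw_ci = 0;

	demo_hw_deliver(3);
	printf("new completions: %u\n", (unsigned)demo_poll(&sw_ci));	/* 3 */
	printf("new completions: %u\n", (unsigned)demo_poll(&sw_ci));	/* 0 */
	return 0;
}

The acquire load plays the same role as the rte_memory_order_acquire read of rxq->rq_ci->dw1.value in hinic3_poll_integrated_cqe_rq_empty(): the CI value must be observed before the driver reads any packet data the device wrote in the same batch.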
* [V4 5/7] net/hinic3: add rx ops to support Compact CQE 2026-03-18 6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (3 preceding siblings ...) 2026-03-18 6:20 ` [V4 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang @ 2026-03-18 6:20 ` Feifei Wang 2026-03-18 6:20 ` [V4 6/7] net/hinic3: add tx " Feifei Wang 2026-03-18 6:20 ` [V4 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 6:20 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt receive path, use different func callback to separate normal CQE process and Compact CQE process. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.h | 3 +- drivers/net/hinic3/hinic3_rx.c | 240 ++++++++++++++++++++++------- drivers/net/hinic3/hinic3_rx.h | 164 +++++++++++++++++++- 3 files changed, 341 insertions(+), 66 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 3898edd076..9061e2b217 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -121,8 +121,7 @@ struct hinic3_nic_dev { uint16_t mtu_size; uint16_t rss_state; - uint8_t num_rss; /**< Number of RSS queues. */ - uint8_t rsvd0; /**< Reserved field 0. */ + uint16_t num_rss; /**< Number of RSS queues. */ uint32_t rx_mode; uint8_t rx_queue_list[HINIC3_MAX_QUEUE_NUM]; diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 3d5f4e4524..363f3f56c8 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -219,11 +219,11 @@ hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq) while (free_wqebbs++ < rxq->q_depth) { ci = hinic3_get_rq_local_ci(rxq); - - rx_cqe = &rxq->rx_cqe[ci]; - - /* Clear done bit. */ - rx_cqe->status = 0; + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + rx_cqe = &rxq->rx_cqe[ci]; + /* Clear done bit. */ + rx_cqe->status = 0; + } rx_info = &rxq->rx_info[ci]; rte_pktmbuf_free(rx_info->mbuf); @@ -299,7 +299,7 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) for (i = 0; i < rearm_wqebbs; i++) { dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]); - /* Fill buffer address only. */ + /* Fill packet dma address into wqe. 
*/ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { rq_wqe->extend_wqe.buf_desc.sge.hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); @@ -307,11 +307,16 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) hinic3_hw_be32(lower_32_bits(dma_addr)); rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; - } else { + } else if (rxq->wqe_type == HINIC3_NORMAL_RQ_WQE) { rq_wqe->normal_wqe.buf_hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); rq_wqe->normal_wqe.buf_lo_addr = hinic3_hw_be32(lower_32_bits(dma_addr)); + } else { + rq_wqe->compact_wqe.buf_hi_addr = + hinic3_hw_be32(upper_32_bits(dma_addr)); + rq_wqe->compact_wqe.buf_lo_addr = + hinic3_hw_be32(lower_32_bits(dma_addr)); } rq_wqe = @@ -355,7 +360,7 @@ hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev, void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1)); @@ -372,7 +377,7 @@ hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev) static void hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, uint32_t *indir_tbl) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; int i = 0; int j; @@ -522,7 +527,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { uint8_t queue_pos; - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list, rss_queue_count, queue_id); @@ -534,8 +539,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, rss_queue_count--; memmove(nic_dev->rx_queue_list + queue_pos, nic_dev->rx_queue_list + queue_pos + 1, - (rss_queue_count - queue_pos) * - sizeof(nic_dev->rx_queue_list[0])); + (rss_queue_count - queue_pos) * sizeof(nic_dev->rx_queue_list[0])); } RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list)); @@ -618,6 +622,32 @@ hinic3_poll_rq_empty(struct hinic3_rxq *rxq) return err; } +int +hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq) +{ + struct hinic3_rx_info *rx_info; + struct hinic3_rq_ci_wb rq_ci; + uint16_t sw_ci; + uint16_t hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(rte_atomic_load_explicit(&rxq->rq_ci->dw1.value, + rte_memory_order_acquire)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + while (sw_ci != hw_ci) { + rx_info = &rxq->rx_info[sw_ci]; + rte_pktmbuf_free(rx_info->mbuf); + rx_info->mbuf = NULL; + + sw_ci++; + sw_ci &= rxq->q_mask; + hinic3_update_rq_local_ci(rxq, 1); + } + + return 0; +} + void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, uint32_t *cqe_hole_cnt, uint32_t *head_ci, uint32_t *head_done) @@ -701,14 +731,17 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) rte_spinlock_unlock(&nic_dev->queue_list_lock); /* Send flush rxq cmd to device. 
*/ - err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + else + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, false); if (err) { PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto rq_flush_failed; } - err = hinic3_poll_rq_empty(rxq); + err = nic_dev->rx_ops->nic_rx_poll_rq_empty(rxq); if (err) { hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt, &head_ci, &head_done); @@ -724,6 +757,7 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return 0; poll_rq_failed: + hinic3_set_rq_enable(nic_dev, rxq->q_id, true); rq_flush_failed: rte_spinlock_lock(&nic_dev->queue_list_lock); set_indir_failed: @@ -746,14 +780,22 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id); if (nic_dev->rss_state == HINIC3_RSS_ENABLE) { - err = hinic3_refill_indir_rqid(rxq); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_FDIR) != 0) + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, true); if (err) { - PMD_DRV_LOG(ERR, - "Refill rq to indirect table failed, eth_dev:%s, queue_idx:%d err:%d", - nic_dev->dev_name, rxq->q_id, err); - hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", + nic_dev->dev_name, rxq->q_id); + } else { + err = hinic3_refill_indir_rqid(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Refill rq to indirect table failed," + "eth_dev:%s, queue_idx:%d err:%d", + nic_dev->dev_name, rxq->q_id, err); + hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + } } } + hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); @@ -772,12 +814,9 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) static inline uint64_t -hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) +hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { - uint16_t vlan_tag; - - vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len); - if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) { + if (!vlan_offload || vlan_tag == 0) { *vlan_tci = 0; return 0; } @@ -788,16 +827,14 @@ hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) } static inline uint64_t -hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) +hinic3_rx_csum(uint16_t csum_err, struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; - uint32_t csum_err; uint64_t flags; if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD))) return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN; - csum_err = HINIC3_GET_RX_CSUM_ERR(status); if (likely(csum_err == 0)) return (HINIC3_PKT_RX_IP_CKSUM_GOOD | HINIC3_PKT_RX_L4_CKSUM_GOOD); @@ -832,11 +869,9 @@ hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) } static inline uint64_t -hinic3_rx_rss_hash(uint32_t offload_type, uint32_t rss_hash_value, uint32_t *rss_hash) +hinic3_rx_rss_hash(uint32_t rss_type, uint32_t rss_hash_value, uint32_t *rss_hash) { - uint32_t rss_type; - rss_type = HINIC3_GET_RSS_TYPES(offload_type); if (likely(rss_type != 0)) { *rss_hash = rss_hash_value; return HINIC3_PKT_RX_RSS_HASH; @@ -931,18 +966,117 @@ hinic3_start_all_rqs(struct rte_eth_dev *eth_dev) return err; } +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + volatile struct hinic3_rq_cqe *cqe = NULL; + uint16_t 
sw_ci; + uint32_t status; + + sw_ci = hinic3_get_rq_local_ci(rxq); + *rx_cqe = &rxq->rx_cqe[sw_ci]; + cqe = *rx_cqe; + + status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&cqe->status, + rte_memory_order_acquire))); + if (!HINIC3_GET_RX_DONE(status)) + return false; + + return true; +} + +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_ci_wb rq_ci; + struct rte_mbuf *rxm = NULL; + uint16_t sw_ci, hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(rte_atomic_load_explicit(&rxq->rq_ci->dw1.value, + rte_memory_order_acquire)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + if (hw_ci == sw_ci) + return false; + + rxm = rxq->rx_info[sw_ci].mbuf; + + *rx_cqe = (volatile struct hinic3_rq_cqe *)rte_mbuf_data_addr_default(rxm); + + return true; +} + +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0 = hinic3_hw_cpu32(rx_cqe->status); + uint32_t dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); + + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + + cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); + cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); + + cqe_info->ptype = HINIC3_GET_RX_PTYPE_OFFLOAD(dw0); + cqe_info->vlan_offload = RQ_CQE_OFFOLAD_TYPE_GET(dw2, VLAN_EN); + cqe_info->rss_type = RQ_CQE_OFFOLAD_TYPE_GET(dw2, RSS_TYPE); + cqe_info->rss_hash_value = dw3; +} + +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0, dw1, dw2; + + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + dw0 = hinic3_hw_cpu32(rx_cqe->status); + dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + } else { + /* Compact Rx CQE mode integrates cqe with packet in big endian way. */ + dw0 = rte_be_to_cpu_32(rx_cqe->status); + dw1 = rte_be_to_cpu_32(rx_cqe->vlan_len); + dw2 = rte_be_to_cpu_32(rx_cqe->offload_type); + } + + cqe_info->cqe_type = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_TYPE); + cqe_info->csum_err = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->vlan_offload = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, VLAN_EN); + cqe_info->cqe_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_LEN); + cqe_info->pkt_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PKT_LEN); + cqe_info->ts_flag = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, TS_FLAG); + cqe_info->ptype = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PTYPE); + cqe_info->rss_hash_value = dw1; + + if (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) { + cqe_info->lro_num = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, NUM_LRO); + cqe_info->vlan_tag = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, VLAN); + } + + if (cqe_info->cqe_type == HINIC3_RQ_CQE_INTEGRATE) + cqe_info->data_offset = + (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) ? 
16 : 8; +} + #define HINIC3_RX_EMPTY_THRESHOLD 3 uint16_t hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct hinic3_rxq *rxq = rx_queue; + struct hinic3_nic_dev *nic_dev = rxq->nic_dev; struct hinic3_rx_info *rx_info = NULL; volatile struct hinic3_rq_cqe *rx_cqe = NULL; + struct hinic3_cqe_info cqe_info = {0}; struct rte_mbuf *rxm = NULL; - uint16_t sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0; - uint32_t status, pkt_len, vlan_len, offload_type, lro_num; + uint16_t sw_ci, rx_buf_len, pkts = 0; + uint32_t pkt_len; uint64_t rx_bytes = 0; - uint32_t hash_value; #ifdef HINIC3_XSTAT_PROF_RX uint64_t t1 = rte_get_tsc_cycles(); @@ -953,20 +1087,22 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) goto out; sw_ci = hinic3_get_rq_local_ci(rxq); - rx_buf_len = rxq->buf_len; while (pkts < nb_pkts) { rx_cqe = &rxq->rx_cqe[sw_ci]; - status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&rx_cqe->status, - rte_memory_order_acquire))); - if (!HINIC3_GET_RX_DONE(status)) { + if (!nic_dev->rx_ops->nic_rx_cqe_done(rxq, &rx_cqe)) { rxq->rxq_stats.empty++; break; } - vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len); + nic_dev->rx_ops->nic_rx_get_cqe_info(rxq, rx_cqe, &cqe_info); - pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len); + pkt_len = cqe_info.pkt_len; + /* + * Compact Rx CQE mode integrates cqe with packet, + * so mbuf length needs to remove the length of cqe. + */ + rx_buf_len = rxq->buf_len - cqe_info.data_offset; rx_info = &rxq->rx_info[sw_ci]; rxm = rx_info->mbuf; @@ -982,7 +1118,7 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (likely(pkt_len <= rx_buf_len)) { rxm->data_len = (uint16_t)pkt_len; rxm->pkt_len = pkt_len; - wqebb_cnt++; + hinic3_update_rq_local_ci(rxq, 1); } else { rxm->data_len = rx_buf_len; rxm->pkt_len = rx_buf_len; @@ -991,33 +1127,28 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * If receive jumbo, updating ci will be done by * hinic3_recv_jumbo_pkt function. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1); - wqebb_cnt = 0; + hinic3_update_rq_local_ci(rxq, 1); hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len); sw_ci = hinic3_get_rq_local_ci(rxq); } - rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_off = RTE_PKTMBUF_HEADROOM + cqe_info.data_offset; rxm->port = rxq->port_id; /* 4. Rx checksum offload. */ - rxm->ol_flags |= hinic3_rx_csum(status, rxq); + rxm->ol_flags |= hinic3_rx_csum(cqe_info.csum_err, rxq); /* 5. Vlan offload. */ - offload_type = hinic3_hw_cpu32(rx_cqe->offload_type); - - rxm->ol_flags |= - hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci); + rxm->ol_flags |= hinic3_rx_vlan(cqe_info.vlan_offload, cqe_info.vlan_tag, + &rxm->vlan_tci); /* 6. RSS. */ - hash_value = hinic3_hw_cpu32(rx_cqe->hash_val); - rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value, + rxm->ol_flags |= hinic3_rx_rss_hash(cqe_info.rss_type, cqe_info.rss_hash_value, &rxm->hash.rss); /* 8. LRO. */ - lro_num = HINIC3_GET_RX_NUM_LRO(status); - if (unlikely(lro_num != 0)) { + if (unlikely(cqe_info.lro_num != 0)) { rxm->ol_flags |= HINIC3_PKT_RX_LRO; - rxm->tso_segsz = pkt_len / lro_num; + rxm->tso_segsz = pkt_len / cqe_info.lro_num; } rx_cqe->status = 0; @@ -1027,9 +1158,6 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } if (pkts) { - /* 9. Update local ci. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt); - /* Update packet stats. 
*/ rxq->rxq_stats.packets += pkts; rxq->rxq_stats.bytes += rx_bytes; diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 7ae39e3e91..2655802467 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -5,15 +5,13 @@ #ifndef _HINIC3_RX_H_ #define _HINIC3_RX_H_ -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0 -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19 -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_SHIFT 0 +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_MASK 0xFFFU +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU #define DPI_EXT_ACTION_FILED (1ULL << 32) @@ -21,6 +19,9 @@ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \ RQ_CQE_OFFOLAD_TYPE_##member##_MASK) +#define HINIC3_GET_RX_PTYPE_OFFLOAD(offload_type) \ + RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PTYPE_OFFLOAD) + #define HINIC3_GET_RX_PKT_TYPE(offload_type) \ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE) @@ -122,6 +123,54 @@ #define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD) +/* Compact CQE Field */ +/* cqe dw0 */ +#define RQ_COMPACT_CQE_STATUS_RXDONE_SHIFT 31 +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_SHIFT 30 +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_SHIFT 29 +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_SHIFT 28 +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_SHIFT 25 +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_SHIFT 24 +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_SHIFT 23 +#define RQ_COMPACT_CQE_STATUS_PKT_MC_SHIFT 21 +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_SHIFT 19 +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PTYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_SHIFT 0 + +#define RQ_COMPACT_CQE_STATUS_RXDONE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_MC_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_MASK 0x3U +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_PTYPE_MASK 0xFFFU +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_STATUS_GET(val, member) \ + ((((val) >> RQ_COMPACT_CQE_STATUS_##member##_SHIFT) & \ + RQ_COMPACT_CQE_STATUS_##member##_MASK)) + +#define HINIC3_RQ_CQE_SEPARATE 0 +#define HINIC3_RQ_CQE_INTEGRATE 1 + +/* cqe dw2 */ +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_SHIFT 24 +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_SHIFT 8 + +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_MASK 0xFFU +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(val, member) \ + (((val) >> RQ_COMPACT_CQE_OFFLOAD_##member##_SHIFT) & \ + RQ_COMPACT_CQE_OFFLOAD_##member##_MASK) + +#define HINIC3_RQ_COMPACT_CQE_16BYTE 0 +#define HINIC3_RQ_COMPACT_CQE_8BYTE 1 /* Rx cqe checksum err */ #define HINIC3_RX_CSUM_IP_CSUM_ERR RTE_BIT32(0) #define HINIC3_RX_CSUM_TCP_CSUM_ERR RTE_BIT32(1) @@ -195,6 +244,25 @@ struct __rte_cache_aligned hinic3_rq_cqe 
{ uint32_t pkt_info; }; +struct hinic3_cqe_info { + uint8_t data_offset; + uint8_t lro_num; + uint8_t vlan_offload; + uint8_t cqe_len; + + uint8_t cqe_type; + uint8_t ts_flag; + uint16_t csum_err; + + uint16_t vlan_tag; + uint16_t ptype; + + uint16_t pkt_len; + uint16_t rss_type; + + uint32_t rss_hash_value; +}; + /** * Attention: please do not add any member in hinic3_rx_info * because rxq bulk rearm mode will write mbuf in rx_info. @@ -220,13 +288,32 @@ struct hinic3_rq_normal_wqe { uint32_t cqe_lo_addr; }; +struct hinic3_rq_compact_wqe { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + struct hinic3_rq_wqe { union { + struct hinic3_rq_compact_wqe compact_wqe; struct hinic3_rq_normal_wqe normal_wqe; struct hinic3_rq_extend_wqe extend_wqe; }; }; +struct hinic3_rq_ci_wb { + union { + struct { + uint16_t cqe_num; + uint16_t hw_ci; + } bs; + uint32_t value; + } dw1; + + uint32_t rsvd[3]; +}; + + struct __rte_cache_aligned hinic3_rxq { struct hinic3_nic_dev *nic_dev; @@ -263,6 +350,10 @@ struct __rte_cache_aligned hinic3_rxq { struct hinic3_rq_cqe *rx_cqe; struct rte_mempool *mb_pool; + const struct rte_memzone *ci_mz; + struct hinic3_rq_ci_wb *rq_ci; + rte_iova_t rq_ci_paddr; + const struct rte_memzone *cqe_mz; rte_iova_t cqe_start_paddr; void *cqe_start_vaddr; @@ -308,6 +399,7 @@ void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev); int hinic3_update_rss_config(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq); int hinic3_poll_rq_empty(struct hinic3_rxq *rxq); void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, @@ -369,4 +461,60 @@ hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, uint16_t wqe_cnt) rxq->delta += wqe_cnt; } +/** + * Get receive cqe information + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * Receive cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Get receive compact cqe information + * + * @param[in] rx_queue + * Receive queue + * @param[in] rx_cqe + * Receive compact cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Check whether pkt is received when CQE is separated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/** + * Check whether pkt is received when CQE is integrated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + #endif /* _HINIC3_RX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
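The key difference the rx ops hide is how completion is detected: the separate-CQE path polls a done bit in each CQE, while the Compact/integrated path only gets a hardware consumer-index write-back and finds the CQE at the head of the packet buffer. The self-contained sketch below shows only that dispatch pattern; the toy_* names, the queue depth and the done-bit position are invented for illustration and are not the hinic3 structures.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_Q_DEPTH 8

struct toy_cqe {
	uint32_t status;                   /* bit 31 = done bit (separate mode) */
};

struct toy_rxq {
	struct toy_cqe cqe[TOY_Q_DEPTH];   /* separate-CQE ring */
	uint16_t hw_ci;                    /* hardware CI write-back (integrated mode) */
	uint16_t sw_ci;
};

/* Separate mode: hardware sets a done bit inside the CQE itself. */
static bool toy_separate_cqe_done(struct toy_rxq *rxq)
{
	return (rxq->cqe[rxq->sw_ci % TOY_Q_DEPTH].status >> 31) & 0x1;
}

/* Integrated (Compact) mode: hardware only advances a CI write-back,
 * so a packet is ready whenever sw_ci lags behind hw_ci. */
static bool toy_integrated_cqe_done(struct toy_rxq *rxq)
{
	return rxq->sw_ci != rxq->hw_ci;
}

struct toy_rx_ops {
	bool (*cqe_done)(struct toy_rxq *rxq);
};

static const struct toy_rx_ops toy_separate_ops   = { .cqe_done = toy_separate_cqe_done };
static const struct toy_rx_ops toy_integrated_ops = { .cqe_done = toy_integrated_cqe_done };

int main(void)
{
	struct toy_rxq rxq = {0};
	int compact_cqe = 1;               /* would be decided once at queue init */
	const struct toy_rx_ops *ops = compact_cqe ? &toy_integrated_ops : &toy_separate_ops;

	rxq.hw_ci = 3;                     /* pretend hardware completed 3 packets */
	while (ops->cqe_done(&rxq)) {
		printf("receive packet at ci %u\n", (unsigned)rxq.sw_ci);
		rxq.sw_ci++;
	}
	return 0;
}

The burst loop itself stays identical for both modes; only the done check and the CQE-info parser change behind the ops pointer.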
* [V4 6/7] net/hinic3: add tx ops to support Compact CQE 2026-03-18 6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (4 preceding siblings ...) 2026-03-18 6:20 ` [V4 5/7] net/hinic3: add rx " Feifei Wang @ 2026-03-18 6:20 ` Feifei Wang 2026-03-18 6:20 ` [V4 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 6:20 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt send path, use different func callback to configure compact wqe and normal wqe offload. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_tx.c | 454 +++++++++++++++++---------------- drivers/net/hinic3/hinic3_tx.h | 146 +++++++++-- 2 files changed, 361 insertions(+), 239 deletions(-) diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index c896fcc76b..fca94dd08e 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -21,6 +21,7 @@ #define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET 1 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0 +#define MAX_TSO_NUM_FRAG 1024 #define HINIC3_TX_OFFLOAD_MASK \ (HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT) @@ -28,7 +29,8 @@ #define HINIC3_TX_CKSUM_OFFLOAD_MASK \ (HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \ HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \ - HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG) + HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_OUTER_UDP_CKSUM | \ + HINIC3_PKT_TX_TCP_SEG) static inline uint16_t hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq) @@ -56,26 +58,23 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq) } static void * -hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) +hinic3_sq_get_wqebbs(struct hinic3_txq *sq, uint16_t num_wqebbs, uint16_t *prod_idx) { - uint16_t cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx); - uint32_t end_pi; + *prod_idx = MASKED_QUEUE_IDX(sq, sq->prod_idx); + sq->prod_idx += num_wqebbs; - end_pi = cur_pi + wqe_info->wqebb_cnt; - sq->prod_idx += wqe_info->wqebb_cnt; + return NIC_WQE_ADDR(sq, *prod_idx); +} - wqe_info->owner = (uint8_t)(sq->owner); - wqe_info->pi = cur_pi; - wqe_info->wrapped = 0; +static inline uint16_t +hinic3_get_and_update_sq_owner(struct hinic3_txq *sq, uint16_t curr_pi, uint16_t wqebb_cnt) +{ + uint16_t owner = sq->owner; - if (unlikely(end_pi >= sq->q_depth)) { + if (unlikely(curr_pi + wqebb_cnt >= sq->q_depth)) sq->owner = !sq->owner; - if (likely(end_pi > sq->q_depth)) - wqe_info->wrapped = (uint8_t)(sq->q_depth - cur_pi); - } - - return NIC_WQE_ADDR(sq, cur_pi); + return owner; } static inline void @@ -90,61 +89,39 @@ hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) /** * Sets the WQE combination information in the transmit queue (SQ). * - * @param[in] txq + * @param[in] sq * Point to send queue. * @param[out] wqe_combo * Point to wqe_combo of send queue(SQ). - * @param[in] wqe - * Point to wqe of send queue(SQ). * @param[in] wqe_info * Point to wqe_info of send queue(SQ). 
*/ static void -hinic3_set_wqe_combo(struct hinic3_txq *txq, +hinic3_set_wqe_combo(struct hinic3_txq *sq, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_sq_wqe *wqe, struct hinic3_wqe_info *wqe_info) { - wqe_combo->hdr = &wqe->compact_wqe.wqe_desc; - - if (wqe_info->offload) { - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->task = (struct hinic3_sq_task *) - (void *)txq->sq_head_addr; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr + txq->wqebb_size); - } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = wqe->extend_wqe.buf_desc; - } + uint16_t tmp_pi; - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->hdr = hinic3_sq_get_wqebbs(sq, 1, &wqe_info->pi); + if (wqe_info->wqebb_cnt == 1) { + /* compact wqe */ + wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_4BYTES; + wqe_combo->task = (struct hinic3_sq_task *)&wqe_combo->hdr->queue_info; + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, 1); return; } - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->bds_head = - (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task); - } + /* extend normal wqe */ + wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->task = hinic3_sq_get_wqebbs(sq, 1, &tmp_pi); + if (wqe_info->sge_cnt > 1) + wqe_combo->bds_head = hinic3_sq_get_wqebbs(sq, wqe_info->sge_cnt - 1, &tmp_pi); - if (wqe_info->wqebb_cnt > 1) { - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS; - - /* This section used as vlan insert, needs to clear. */ - wqe_combo->bds_head->rsvd = 0; - } else { - wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; - } + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, wqe_info->wqebb_cnt); } int @@ -311,6 +288,8 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) /** * Prepare the data packet to be sent and calculate the internal L3 offset. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to be processed. * @param[out] inner_l3_offset @@ -319,14 +298,20 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) * 0 as success, -EINVAL as failure. */ static int -hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) +hinic3_tx_offload_pkt_prepare(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + uint16_t *inner_l3_offset) { uint64_t ol_flags = mbuf->ol_flags; - /* Only support vxlan offload. 
*/ - if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) && - (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN))) - return -EINVAL; + if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) { + if (!(((ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN) && + HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_GENEVE) && + HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_IPIP) && + HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)))) + return -EINVAL; + } #ifdef RTE_LIBRTE_ETHDEV_DEBUG if (rte_validate_tx_offload(mbuf) != 0) @@ -358,107 +343,121 @@ hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) return 0; } -static inline void -hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, uint16_t vlan_tag, - uint8_t vlan_type) +void +hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) +{ + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l4_en, INNER_L4_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l3_en, INNER_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->encapsulation, TUNNEL_FLAG); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l3_en, OUT_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l4_en, OUT_L4_EN); + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); + + if (wqe_combo->task_type == SQ_WQE_TASKSECT_16BYTES) { + task->ip_identify = 0; + task->pkt_info2 = 0; + task->vlan_offload = 0; + task->vlan_offload = SQ_TASK_INFO3_SET(offload_info->vlan_tag, VLAN_TAG) | + SQ_TASK_INFO3_SET(offload_info->vlan_sel, VLAN_TYPE) | + SQ_TASK_INFO3_SET(offload_info->vlan_valid, VLAN_TAG_VALID); + task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + } +} + +void +hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) | - SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) | - SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID); + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + wqe_combo->task->pkt_info0 = + SQ_TASK_INFO_SET(offload_info->out_l3_en, OUT_L3_EN) | + SQ_TASK_INFO_SET(offload_info->out_l4_en, OUT_L4_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l3_en, INNER_L3_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l4_en, INNER_L4_EN) | + SQ_TASK_INFO_SET(offload_info->vlan_valid, VLAN_VALID) | + SQ_TASK_INFO_SET(offload_info->vlan_sel, VLAN_SEL) | + SQ_TASK_INFO_SET(offload_info->vlan_tag, VLAN_TAG); + + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); } -/** - * Set the corresponding offload information based on ol_flags of the mbuf. - * - * @param[in] mbuf - * Point to the mbuf for which offload needs to be set in the sending queue. - * @param[out] task - * Point to task of send queue(SQ). - * @param[out] wqe_info - * Point to wqe_info of send queue(SQ). - * @return - * 0 as success, -EINVAL as failure. 
- */ static int -hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task, - struct hinic3_wqe_info *wqe_info) +hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, + struct rte_mbuf *mbuf, + struct hinic3_sq_wqe_combo *wqe_combo, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t pld_offset = 0; - uint32_t queue_info = 0; - uint16_t vlan_tag; - - task->pkt_info0 = 0; - task->ip_identify = 0; - task->pkt_info2 = 0; - task->vlan_offload = 0; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; /* Vlan offload. */ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) { - vlan_tag = mbuf->vlan_tci; - hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0); - task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + offload_info->vlan_valid = 1; + offload_info->vlan_tag = mbuf->vlan_tci; + offload_info->vlan_sel = HINIC3_TX_TPID0; } - /* Cksum offload. */ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK)) - return 0; + goto set_tx_wqe_offload; /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - pld_offset = wqe_info->payload_offset; - if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET) + wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); - - queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF); - - /* Set MSS value. */ - queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS); + offload_info->inner_l3_en = 1; + offload_info->inner_l4_en = 1; + wqe_info->queue_info.tso = 1; + wqe_info->queue_info.mss = mbuf->tso_segsz; } else { if (ol_flags & HINIC3_PKT_TX_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); + offload_info->inner_l3_en = 1; switch (ol_flags & HINIC3_PKT_TX_L4_MASK) { case HINIC3_PKT_TX_TCP_CKSUM: case HINIC3_PKT_TX_UDP_CKSUM: case HINIC3_PKT_TX_SCTP_CKSUM: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); + offload_info->inner_l4_en = 1; break; - case HINIC3_PKT_TX_L4_NO_CKSUM: break; - default: PMD_DRV_LOG(INFO, "not support pkt type"); return -EINVAL; } } - /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) { case HINIC3_PKT_TX_TUNNEL_VXLAN: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG); + case HINIC3_PKT_TX_TUNNEL_VXLAN_GPE: + case HINIC3_PKT_TX_TUNNEL_GENEVE: + offload_info->encapsulation = 1; + wqe_info->queue_info.udp_dp_en = 1; break; - case 0: break; default: - /* For non UDP/GRE tunneling, drop the tunnel packet. 
*/ PMD_DRV_LOG(INFO, "not support tunnel pkt type"); return -EINVAL; } if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN); + offload_info->out_l3_en = 1; - task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); - task->pkt_info2 = hinic3_hw_be32(task->pkt_info2); - wqe_info->queue_info = queue_info; + if (ol_flags & HINIC3_PKT_TX_OUTER_UDP_CKSUM) + offload_info->out_l4_en = 1; + +set_tx_wqe_offload: + nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -477,7 +476,9 @@ static bool hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) { uint32_t total_len, limit_len, checked_len, left_len, adjust_mss; - uint32_t i, max_sges, left_sges, first_len; + uint32_t max_sges, left_sges, first_len; + uint32_t payload_len, frag_num; + uint32_t i; struct rte_mbuf *mbuf_head, *mbuf_first; struct rte_mbuf *mbuf_pre = mbuf; @@ -485,6 +486,17 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) mbuf_head = mbuf; mbuf_first = mbuf; + /* Calculate the number of message payload frag, + * if it exceeds the hardware limit of 10 bits, + * packet will be discarded. + */ + payload_len = mbuf_head->pkt_len - wqe_info->payload_offset; + frag_num = (payload_len + mbuf_head->tso_segsz - 1) / mbuf_head->tso_segsz; + if (frag_num > MAX_TSO_NUM_FRAG) { + PMD_DRV_LOG(WARNING, "tso frag num over hw limit, frag_num:0x%x.", frag_num); + return false; + } + /* Tso sge number validation. */ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) { checked_len = 0; @@ -544,9 +556,48 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) return true; } +static int +hinic3_non_tso_pkt_pre_process(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +{ + struct rte_mbuf *mbuf_pkt = mbuf; + uint32_t total_len = 0; + uint16_t i; + + if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) + return 0; + + /* Non-tso packet length must less than 64KB. */ + if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) + return -EINVAL; + + /* + * Mbuf number of non-tso packet must less than the sge number + * that nic can support. The excess part will be copied to another + * mbuf. + */ + for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { + total_len += mbuf_pkt->data_len; + mbuf_pkt = mbuf_pkt->next; + } + + /* + * Max copy mbuf size is 4KB, packet will be dropped directly, + * if total copy length is more than it. + */ + if ((total_len + HINIC3_COPY_MBUF_SIZE) < mbuf->pkt_len) + return -EINVAL; + + wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; + wqe_info->cpy_mbuf_cnt = 1; + + return 0; +} + /** * Checks and processes transport offload information for data packets. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to send. * @param[in] wqe_info @@ -555,56 +606,29 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) * 0 as success, -EINVAL as failure. */ static int -hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t i, total_len, inner_l3_offset = 0; + uint16_t inner_l3_offset = 0; int err; - struct rte_mbuf *mbuf_pkt = NULL; wqe_info->sge_cnt = mbuf->nb_segs; + wqe_info->cpy_mbuf_cnt = 0; /* Check if the packet set available offload flags. 
*/ if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) { wqe_info->offload = 0; - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } wqe_info->offload = 1; - err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset); + err = hinic3_tx_offload_pkt_prepare(nic_dev, mbuf, &inner_l3_offset); if (err) return err; - /* Non tso mbuf only check sge num. */ + /* Non-tso mbuf only check sge num. */ if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { - if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) - /* Non tso packet len must less than 64KB. */ - return -EINVAL; - - if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) - /* Valid non-tso mbuf. */ - return 0; - - /* - * The number of non-tso packet fragments must be less than 38, - * and mbuf segs greater than 38 must be copied to other - * buffers. - */ - total_len = 0; - mbuf_pkt = mbuf; - for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { - total_len += mbuf_pkt->data_len; - mbuf_pkt = mbuf_pkt->next; - } - - /* Default support copy total 4k mbuf segs. */ - if ((uint32_t)(total_len + (uint16_t)HINIC3_COPY_MBUF_SIZE) < - mbuf->pkt_len) - return -EINVAL; - - wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; - wqe_info->cpy_mbuf_cnt = 1; - - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } /* Tso mbuf. */ @@ -629,6 +653,7 @@ hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); buf_descs->len = hinic3_hw_be32(len); + buf_descs->rsvd = 0; } static inline struct rte_mbuf * @@ -701,7 +726,6 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, { struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head; - uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt; uint16_t real_segs = mbuf->nb_segs; rte_iova_t dma_addr; @@ -736,11 +760,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, * Parts of wqe is in sq bottom while parts * of wqe is in sq head. 
*/ - if (unlikely(wqe_info->wrapped && - (uint64_t)buf_desc == txq->sq_bot_sge_addr)) - buf_desc = (struct hinic3_sq_bufdesc *) - (void *)txq->sq_head_addr; - + if (unlikely((uint64_t)buf_desc == txq->sq_bot_sge_addr)) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); buf_desc++; } @@ -777,10 +798,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, hinic3_hw_be32(lower_32_bits(dma_addr)); wqe_desc->ctrl_len = mbuf->data_len; } else { - if (unlikely(wqe_info->wrapped && - ((uint64_t)buf_desc == txq->sq_bot_sge_addr))) - buf_desc = (struct hinic3_sq_bufdesc *) - txq->sq_head_addr; + if (unlikely(((uint64_t)buf_desc == txq->sq_bot_sge_addr))) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); } @@ -802,44 +821,44 @@ static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, struct hinic3_wqe_info *wqe_info) { + struct hinic3_queue_info *queue_info = &wqe_info->queue_info; struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; + uint32_t *qsf = &wqe_desc->queue_info; - if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) { - wqe_desc->ctrl_len |= - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | - SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | - SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - /* Compact wqe queue_info will transfer to ucode. */ - wqe_desc->queue_info = 0; - - return; - } - - wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | - SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | + wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DIRECT) | SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - wqe_desc->queue_info = wqe_info->queue_info; - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC); - - if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) { - wqe_desc->queue_info |= - SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); - } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) < - TX_MSS_MIN) { - /* Mss should not less than 80. */ - wqe_desc->queue_info = - SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS); - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { + wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | + SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | + SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + + *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); + + if (!SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS)) { + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); + } else if (SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS) < TX_MSS_MIN) { + /* MSS should not less than 80. 
*/ + *qsf = SQ_CTRL_QUEUE_INFO_CLEAR(*qsf, MSS); + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + } + *qsf = hinic3_hw_be32(*qsf); + } else { + wqe_desc->ctrl_len |= SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->udp_dp_en, UDP_DP_EN) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE); } - wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info); + wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); } /** @@ -861,9 +880,7 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) struct hinic3_tx_info *tx_info = NULL; struct rte_mbuf *mbuf_pkt = NULL; struct hinic3_sq_wqe_combo wqe_combo = {0}; - struct hinic3_sq_wqe *sq_wqe = NULL; struct hinic3_wqe_info wqe_info = {0}; - uint32_t offload_err, free_cnt; uint64_t tx_bytes = 0; uint16_t free_wqebb_cnt, nb_tx; @@ -885,16 +902,28 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Tx loop routine. */ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { mbuf_pkt = *tx_pkts++; - if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) { + if (unlikely(hinic3_get_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_info))) { txq->txq_stats.offload_errors++; break; } - if (!wqe_info.offload) - wqe_info.wqebb_cnt = wqe_info.sge_cnt; - else - /* Use extended sq wqe with normal TS. */ - wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1; + wqe_info.wqebb_cnt = wqe_info.sge_cnt; + if (likely(wqe_info.offload || wqe_info.wqebb_cnt > 1)) { + if (txq->tx_wqe_compact_task) { + /** + * One more wqebb is needed for compact task under two situations: + * 1. TSO: MSS field is needed, no available space for + * compact task in compact wqe. + * 2. SGE number > 1: wqe is handlerd as extented wqe by nic. + */ + if (mbuf_pkt->ol_flags & HINIC3_PKT_TX_TCP_SEG || + wqe_info.wqebb_cnt > 1) + wqe_info.wqebb_cnt++; + } else { + /* Use extended sq wqe with normal TS */ + wqe_info.wqebb_cnt++; + } + } free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq); if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) { @@ -907,28 +936,16 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) } } - /* Get sq wqe address from wqe_page. */ - sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info); - if (unlikely(!sq_wqe)) { - txq->txq_stats.tx_busy++; - break; - } - - /* Task or bd section maybe wrapped for one wqe. */ - hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info); + /* Task or bd section maybe warpped for one wqe. */ + hinic3_set_wqe_combo(txq, &wqe_combo, &wqe_info); - wqe_info.queue_info = 0; /* Fill tx packet offload into qsf and task field. */ - if (wqe_info.offload) { - offload_err = hinic3_set_tx_offload(mbuf_pkt, - wqe_combo.task, - &wqe_info); + offload_err = hinic3_set_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_combo, &wqe_info); if (unlikely(offload_err)) { hinic3_put_sq_wqe(txq, &wqe_info); txq->txq_stats.offload_errors++; break; } - } /* Fill sq_wqe buf_desc and bd_desc. */ err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo, @@ -944,7 +961,12 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) tx_info->mbuf = mbuf_pkt; tx_info->wqebb_cnt = wqe_info.wqebb_cnt; - hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); + /* + * For wqe compact type, no need to prepare + * sq ctrl info. 
+ */ + if (wqe_combo.wqe_type != SQ_WQE_COMPACT_TYPE) + hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); tx_bytes += mbuf_pkt->pkt_len; } diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index 21958a00cc..e0ed9908ad 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -6,30 +6,40 @@ #define _HINIC3_TX_H_ #define MAX_SINGLE_SGE_SIZE 65536 -#define HINIC3_NONTSO_PKT_MAX_SGE 38 /**< non-tso max sge 38. */ +#define HINIC3_NONTSO_PKT_MAX_SGE 32 /**< non-tso max sge 32. */ #define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE) #define HINIC3_TSO_PKT_MAX_SGE 127 /**< tso max sge 127. */ #define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE) -/* Tx offload info. */ -struct hinic3_tx_offload_info { - uint8_t outer_l2_len; - uint8_t outer_l3_type; - uint16_t outer_l3_len; - - uint8_t inner_l2_len; - uint8_t inner_l3_type; - uint16_t inner_l3_len; - - uint8_t tunnel_length; - uint8_t tunnel_type; - uint8_t inner_l4_type; - uint8_t inner_l4_len; +/* Tx wqe queue info */ +struct hinic3_queue_info { + uint8_t pri; + uint8_t uc; + uint8_t sctp; + uint8_t udp_dp_en; + uint8_t tso; + uint8_t ufo; + uint8_t payload_offset; + uint8_t pkt_type; + uint16_t mss; + uint16_t rsvd; +}; - uint16_t payload_offset; - uint8_t inner_l4_tcp_udp; - uint8_t rsvd0; /**< Reserved field. */ +/* Tx wqe offload info */ +struct hinic3_offload_info { + uint8_t encapsulation; + uint8_t esp_next_proto; + uint8_t inner_l4_en; + uint8_t inner_l3_en; + uint8_t out_l4_en; + uint8_t out_l3_en; + uint8_t ipsec_offload; + uint8_t pkt_1588; + uint8_t vlan_sel; + uint8_t vlan_valid; + uint16_t vlan_tag; + uint32_t ip_identify; }; /* Tx wqe ctx. */ @@ -42,14 +52,15 @@ struct hinic3_wqe_info { uint8_t rsvd0; /**< Reserved field 0. */ uint16_t payload_offset; - uint8_t wrapped; + uint8_t rsvd1; /**< Reserved field 1. */ uint8_t owner; uint16_t pi; uint16_t wqebb_cnt; - uint16_t rsvd1; /**< Reserved field 1. */ + uint16_t rsvd2; /**< Reserved field 2. */ - uint32_t queue_info; + struct hinic3_queue_info queue_info; + struct hinic3_offload_info offload_info; }; /* Descriptor for the send queue of wqe. */ @@ -103,8 +114,15 @@ struct hinic3_sq_wqe_combo { uint32_t task_type; }; -enum sq_wqe_data_format { +/* Tx queue ctrl info */ +enum sq_wqe_type { SQ_NORMAL_WQE = 0, + SQ_DIRECT_WQE = 1, +}; + +enum sq_wqe_data_format { + SQ_WQE_SGL = 0, + SQ_WQE_INLINE_DATA = 1, }; /* Indicates the type of a WQE. */ @@ -117,7 +135,7 @@ enum sq_wqe_ec_type { /* Indicates the type of tasks with different lengths. 
*/ enum sq_wqe_tasksect_len_type { - SQ_WQE_TASKSECT_46BITS = 0, + SQ_WQE_TASKSECT_4BYTES = 0, SQ_WQE_TASKSECT_16BYTES = 1, }; @@ -177,6 +195,33 @@ enum sq_wqe_tasksect_len_type { ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT))) +/* Compact queue info */ +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_SHIFT 14 +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_SHIFT 16 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_SHIFT 24 +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_SHIFT 25 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_SHIFT 26 +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_SHIFT 27 + +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_MASK 0x3U +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_MASK 0xFFU +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_MASK 0x1U + +#define SQ_CTRL_COMPACT_QUEUE_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_GET(val, member) \ + (((val) >> SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) & \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_CLEAR(val, member) \ + ((val) & (~(SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT))) + /* Setting and obtaining task information */ #define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19 #define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22 @@ -229,6 +274,37 @@ enum sq_wqe_tasksect_len_type { (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \ SQ_TASK_INFO3_##member##_MASK) +/* compact wqe task field */ +#define SQ_TASK_INFO_PKT_1588_SHIFT 31 +#define SQ_TASK_INFO_IPSEC_PROTO_SHIFT 30 +#define SQ_TASK_INFO_OUT_L3_EN_SHIFT 28 +#define SQ_TASK_INFO_OUT_L4_EN_SHIFT 27 +#define SQ_TASK_INFO_INNER_L3_EN_SHIFT 25 +#define SQ_TASK_INFO_INNER_L4_EN_SHIFT 24 +#define SQ_TASK_INFO_ESP_NEXT_PROTO_SHIFT 22 +#define SQ_TASK_INFO_VLAN_VALID_SHIFT 19 +#define SQ_TASK_INFO_VLAN_SEL_SHIFT 16 +#define SQ_TASK_INFO_VLAN_TAG_SHIFT 0 + +#define SQ_TASK_INFO_PKT_1588_MASK 0x1U +#define SQ_TASK_INFO_IPSEC_PROTO_MASK 0x1U +#define SQ_TASK_INFO_OUT_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_OUT_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_ESP_NEXT_PROTO_MASK 0x3U +#define SQ_TASK_INFO_VLAN_VALID_MASK 0x1U +#define SQ_TASK_INFO_VLAN_SEL_MASK 0x7U +#define SQ_TASK_INFO_VLAN_TAG_MASK 0xFFFFU + +#define SQ_TASK_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_TASK_INFO_##member##_MASK) << \ + SQ_TASK_INFO_##member##_SHIFT) + +#define SQ_TASK_INFO_GET(val, member) \ + (((val) >> SQ_TASK_INFO_##member##_SHIFT) & \ + SQ_TASK_INFO_##member##_MASK) + /* Defines the TX queue status. 
*/ enum hinic3_txq_status { HINIC3_TXQ_STATUS_START = 0, @@ -298,6 +374,8 @@ struct __rte_cache_aligned hinic3_txq { uint64_t sq_head_addr; uint64_t sq_bot_sge_addr; uint32_t cos; + uint8_t tx_wqe_compact_task; + uint8_t rsvd[3]; struct hinic3_txq_stats txq_stats; #ifdef HINIC3_XSTAT_PROF_TX uint64_t prof_tx_end_tsc; @@ -319,4 +397,26 @@ uint16_t hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb int hinic3_stop_sq(struct hinic3_txq *txq); int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev); int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt); + +/** + * Set wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +/** + * Set compact wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); #endif /**< _HINIC3_TX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
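On the send side the split is analogous: the same parsed offload information is written either into a 16-byte extended task section or packed into a single dword of the compact WQE, and the per-mode setter is picked once through tx_ops. The sketch below illustrates that shape only; the toy_* types and the bit positions used are simplified stand-ins, not the actual hinic3 WQE layout.

#include <stdint.h>
#include <stdio.h>

struct toy_offload {                   /* parsed from the mbuf's ol_flags */
	uint8_t inner_l3_en;
	uint8_t inner_l4_en;
	uint8_t vlan_valid;
	uint16_t vlan_tag;
};

struct toy_wqe {
	uint32_t ctrl;
	uint32_t task[4];                  /* extended task section (normal mode) */
};

/* Normal mode: offload bits go into a dedicated 16-byte task section. */
static void toy_set_normal_task(const struct toy_offload *o, struct toy_wqe *w)
{
	w->task[0] = ((uint32_t)o->inner_l3_en << 25) | ((uint32_t)o->inner_l4_en << 24);
	w->task[3] = ((uint32_t)o->vlan_valid << 31) | o->vlan_tag;
}

/* Compact mode: the same information is packed into one dword of the WQE
 * header, so a single-SGE packet fits in one WQEBB. */
static void toy_set_compact_task(const struct toy_offload *o, struct toy_wqe *w)
{
	w->task[0] = ((uint32_t)o->inner_l3_en << 25) |
		     ((uint32_t)o->inner_l4_en << 24) |
		     ((uint32_t)o->vlan_valid << 19) |
		     o->vlan_tag;
}

struct toy_tx_ops {
	void (*set_task_offload)(const struct toy_offload *o, struct toy_wqe *w);
};

static const struct toy_tx_ops toy_normal_ops  = { .set_task_offload = toy_set_normal_task };
static const struct toy_tx_ops toy_compact_ops = { .set_task_offload = toy_set_compact_task };

int main(void)
{
	struct toy_offload o = { .inner_l3_en = 1, .inner_l4_en = 1,
				 .vlan_valid = 1, .vlan_tag = 100 };
	struct toy_wqe wqe = {0};
	int compact_wqe = 1;               /* would be chosen once at device init */
	const struct toy_tx_ops *ops = compact_wqe ? &toy_compact_ops : &toy_normal_ops;

	ops->set_task_offload(&o, &wqe);
	printf("task dword0 = 0x%08x\n", (unsigned)wqe.task[0]);
	return 0;
}

Keeping the WQEBB-count decision next to this selection is what lets the compact path send a plain single-SGE packet with one WQEBB while TSO or multi-SGE packets still fall back to the extended layout.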
* [V4 7/7] net/hinic3: use different callback func to support htn fdir 2026-03-18 6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (5 preceding siblings ...) 2026-03-18 6:20 ` [V4 6/7] net/hinic3: add tx " Feifei Wang @ 2026-03-18 6:20 ` Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 6:20 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, the way flow rules created is different from previous SPx NIC, so use different callback func to split them. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 55 +- drivers/net/hinic3/base/hinic3_nic_cfg.h | 19 +- drivers/net/hinic3/hinic3_ethdev.c | 41 +- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.h | 16 - drivers/net/hinic3/hinic3_rx.c | 26 +- drivers/net/hinic3/hinic3_tx.c | 16 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 8 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 2 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 8 + 12 files changed, 877 insertions(+), 340 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index 22caac0457..5387626b98 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -970,7 +970,7 @@ hinic3_set_vlan_filter(struct hinic3_hwdev *hwdev, uint32_t vlan_filter_ctrl) static int hinic3_set_rx_lro(struct hinic3_hwdev *hwdev, uint8_t ipv4_en, - uint8_t ipv6_en, uint8_t lro_max_pkt_len) + uint8_t ipv6_en, uint8_t lro_max_pkt_len) { struct hinic3_cmd_lro_config lro_cfg = {0}; uint16_t out_size = sizeof(lro_cfg); @@ -1029,7 +1029,7 @@ hinic3_set_rx_lro_timer(struct hinic3_hwdev *hwdev, uint32_t timer_value) } int -hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len) { uint8_t ipv4_en = 0, ipv6_en = 0; @@ -1468,54 +1468,6 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return 0; } -/** - * Set the Ethernet type filtering rule for the FDIR of a NIC. - * - * @param[in] hwdev - * Pointer to hardware device structure. - * @param[in] pkt_type - * Indicate the packet type. - * @param[in] queue_id - * Indicate the queue id. - * @param[in] en - * Indicate whether to add or delete an operation. 1 - add; 0 - delete. - * - * @return - * 0 on success, non-zero on failure. 
- */ -int -hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en) -{ - struct hinic3_set_fdir_ethertype_rule ethertype_cmd; - uint16_t out_size = sizeof(ethertype_cmd); - int err; - - if (!hwdev) - return -EINVAL; - - memset(ðertype_cmd, 0, - sizeof(struct hinic3_set_fdir_ethertype_rule)); - ethertype_cmd.func_id = hinic3_global_func_id(hwdev); - ethertype_cmd.pkt_type = pkt_type; - ethertype_cmd.pkt_type_en = en; - ethertype_cmd.qid = (uint8_t)queue_id; - - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, - HINIC3_NIC_CMD_SET_FDIR_STATUS, - ðertype_cmd, sizeof(ethertype_cmd), - ðertype_cmd, &out_size); - if (err || ethertype_cmd.head.status || !out_size) { - PMD_DRV_LOG(ERR, - "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", - err, ethertype_cmd.head.status, out_size, - ethertype_cmd.func_id); - return -EIO; - } - - return 0; -} - int hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule, uint8_t tcam_rule_type) @@ -1543,8 +1495,7 @@ hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tc &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd, &out_size); if (err || tcam_cmd.msg_head.status || !out_size) { - PMD_DRV_LOG(ERR, - "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", + PMD_DRV_LOG(ERR, "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", err, tcam_cmd.msg_head.status, out_size); return -EIO; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index 06d5bc7d1b..6d3eb433bd 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -1203,7 +1203,7 @@ int hinic3_set_rx_vlan_offload(struct hinic3_hwdev *hwdev, uint8_t en); * @return * 0 on success, non-zero on failure. */ -int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len); /** @@ -1522,8 +1522,21 @@ int hinic3_get_feature_from_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, */ int hinic3_set_feature_to_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, uint16_t size); -int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en); +/** + * Set the Ethernet type filtering rule for the FDIR of a NIC. + * + * @param[in] hwdev + * Pointer to hardware device structure. + * @param[in] pkt_type + * Indicate the packet type. + * @param[in] ethertype_filter + * Pointer to ethertype_filter structure. + * @param[in] en + * Indicate whether to add or delete an operation. 1 - add; 0 - delete. + * + * @return + * 0 on success, non-zero on failure. 
+ */ int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status); diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 1010773ac1..adeae07f27 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -975,8 +975,8 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, "RX queue depth is out of range from %d to %d", HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH); PMD_DRV_LOG(ERR, - "nb_desc: %d, q_depth: %d, port: %d queue: %d", - nb_desc, rq_depth, dev->data->port_id, qid); + "nb_desc: %d, q_depth: %d, port: %d queue: %d", + nb_desc, rq_depth, dev->data->port_id, qid); return -EINVAL; } @@ -2158,8 +2158,7 @@ hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) } /* Update max frame size. */ - HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = - HINIC3_MTU_TO_PKTLEN(mtu); + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = HINIC3_MTU_TO_PKTLEN(mtu); nic_dev->mtu_size = mtu; return err; } @@ -2357,6 +2356,12 @@ hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev) uint32_t rx_mode; int err; + if (!(nic_dev->feature_cap & NIC_F_PROMISC)) { + PMD_DRV_LOG(ERR, "nic_dev: %s, port_id: %d, do not support vf promisc: %" PRIu64 "", + nic_dev->dev_name, dev->data->port_id, nic_dev->feature_cap); + return -ENOTSUP; + } + rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC; err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode); @@ -2527,20 +2532,22 @@ hinic3_rss_hash_update(struct rte_eth_dev *dev, } rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | - RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) - ? 1 - : 0; + RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | - RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) - ? 1 - : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.tcp_ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); if (err) PMD_DRV_LOG(ERR, "Set RSS type failed"); @@ -2597,12 +2604,16 @@ hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) rss_conf->rss_hf |= rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) : 0; - rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0; - rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0; rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0; - + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? 
RTE_ETH_RSS_IPV6_TCP_EX : 0; + } else { + rss_conf->rss_hf |= 0; + rss_conf->rss_hf |= 0; + } return 0; } diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c index 263a281729..37a4f0cf52 100644 --- a/drivers/net/hinic3/hinic3_fdir.c +++ b/drivers/net/hinic3/hinic3_fdir.c @@ -2,15 +2,15 @@ * Copyright(c) 2025 Huawei Technologies Co., Ltd */ +#include "base/hinic3_cmd.h" #include "base/hinic3_compat.h" #include "base/hinic3_hwdev.h" #include "base/hinic3_hwif.h" #include "base/hinic3_nic_cfg.h" #include "hinic3_ethdev.h" +#include "hinic3_nic_io.h" -#define HINIC3_UINT1_MAX 0x1 -#define HINIC3_UINT4_MAX 0xf -#define HINIC3_UINT15_MAX 0x7fff +#define HINIC3_INVALID_INDEX -1 #define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \ (&((struct hinic3_nic_dev *)(nic_dev))->tcam) @@ -77,6 +77,8 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src IPv4. */ tcam_key->key_mask.sipv4_h = @@ -99,15 +101,9 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip); } -static void -hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) +static void hinic3_fdir_ipv6_tcam_key_init_sip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) { - /* Fill type of ip. */ - tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; - tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; - - /* Fill src IPv6. */ tcam_key->key_mask_ipv6.sipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); tcam_key->key_mask_ipv6.sipv6_key1 = @@ -140,8 +136,11 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); tcam_key->key_info_ipv6.sipv6_key7 = HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); +} - /* Fill dst IPv6. */ +static void hinic3_fdir_ipv6_tcam_key_init_dip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ tcam_key->key_mask_ipv6.dipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); tcam_key->key_mask_ipv6.dipv6_key1 = @@ -176,6 +175,26 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); } +static void hinic3_fdir_ipv6_tcam_key_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + hinic3_fdir_ipv6_tcam_key_init_sip(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init_dip(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + /* Fill type of ip. */ + tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); +} + /** * Set the TCAM information in notunnel scenario. 
* @@ -204,6 +223,10 @@ hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev, tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + + tcam_key->key_mask.vlan_flag = 1; + tcam_key->key_info.vlan_flag = 0; + tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -223,6 +246,8 @@ hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src ipv4. */ tcam_key->key_mask.sipv4_h = @@ -252,6 +277,8 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_vxlan_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_vxlan_ipv6.vlan_flag = 0; /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. */ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 = @@ -288,77 +315,6 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]); } -static void -hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) -{ - tcam_key->key_mask_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_mask_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - - tcam_key->key_mask_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key5 = - 
HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_mask_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); -} - static void hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, struct hinic3_fdir_filter *rule, @@ -370,11 +326,14 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto; tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info_ipv6.tunnel_type = rule->tunnel_type; tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; tcam_key->key_info_ipv6.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -386,7 +345,7 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.sport = rule->key_spec.src_port; if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY) - hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); } /** @@ -448,9 +407,11 @@ hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev, HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info.tunnel_type = rule->tunnel_type; + tcam_key->key_mask.vlan_flag = 1; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + tcam_key->key_info.vlan_flag = 0; tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -479,6 +440,259 @@ hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev, tcam_key_calculate(tcam_key, fdir_tcam_rule); } +static void +hinic3_fdir_tcam_key_set_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.dipv4_l = + 
HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_sip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_mask_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_dip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_mask_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + 
tcam_key->key_mask_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void hinic3_fdir_tcam_key_set_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_sip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_notunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.outer_ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6_htn.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.inner_ipv4, + &rule->key_spec.inner_ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_vxlan_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_vxlan_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.inner_ipv6, + &rule->key_spec.inner_ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_tunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = rule->tunnel_type; + + tcam_key->key_mask_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_mask_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_h = + 
HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); + + if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_outer_ipv4_info(rule, tcam_key); + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_inner_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_inner_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_info_htn_init(struct rte_eth_dev *dev, + struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key, + struct hinic3_tcam_cfg_rule *fdir_tcam_rule) +{ + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + + tcam_key->key_mask_htn.function_id_h = HINIC3_UINT5_MAX; + tcam_key->key_mask_htn.function_id_l = HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_l = + hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_h = + (hinic3_global_func_id(nic_dev->hwdev) >> HINIC3_UINT5_WIDTH) & HINIC3_UINT5_MAX; + + tcam_key->key_mask_htn.ip_proto = rule->key_mask.proto; + tcam_key->key_info_htn.ip_proto = rule->key_spec.proto; + + tcam_key->key_mask_htn.sport = rule->key_mask.src_port; + tcam_key->key_info_htn.sport = rule->key_spec.src_port; + + tcam_key->key_mask_htn.dport = rule->key_mask.dst_port; + tcam_key->key_info_htn.dport = rule->key_spec.dst_port; + if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL) + hinic3_fdir_tcam_notunnel_htn_init(rule, tcam_key); + else + hinic3_fdir_tcam_tunnel_htn_init(rule, tcam_key); + + fdir_tcam_rule->data.qid = rule->rq_index; + + tcam_key_calculate(tcam_key, fdir_tcam_rule); +} + /** * Find filter in given ethertype filter list. * @@ -513,19 +727,30 @@ hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_li * Point to the tcam filter list. * @param[in] key * The tcam key to find. + * @param[in] action_type + * The type of action. + * @param[in] tcam_index + * The index of tcam. * @return * If a matching filter is found, the filter is returned, otherwise NULL. */ static inline struct hinic3_tcam_filter * hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list, - struct hinic3_tcam_key *key) + struct hinic3_tcam_key *key, + uint8_t action_type, uint16_t tcam_index) { struct hinic3_tcam_filter *it; - TAILQ_FOREACH(it, filter_list, entries) { - if (memcmp(key, &it->tcam_key, - sizeof(struct hinic3_tcam_key)) == 0) { - return it; + if (action_type == HINIC3_ACTION_ADD) { + TAILQ_FOREACH(it, filter_list, entries) { + if (memcmp(key, &it->tcam_key, sizeof(struct hinic3_tcam_key)) == 0) + return it; + } + } else { + TAILQ_FOREACH(it, filter_list, entries) { + if (tcam_index == + (it->index + HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(it->dynamic_block_id))) + return it; } } @@ -588,25 +813,18 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, * * @param[in] dev * Pointer to ethernet device structure. - * @param[in] fdir_tcam_rule - * Indicate the filtering rule to be searched for. * @param[in] tcam_info * Ternary Content-Addressable Memory (TCAM) information. - * @param[in] tcam_filter - * Point to the TCAM filter. * @param[out] tcam_index * Indicate the TCAM index to be searched for. * @result * Pointer to the TCAM dynamic block. If the search fails, NULL is returned. 
*/ static struct hinic3_tcam_dynamic_block * -hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, - struct hinic3_tcam_cfg_rule *fdir_tcam_rule, +hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev, struct hinic3_tcam_info *tcam_info, - struct hinic3_tcam_filter *tcam_filter, uint16_t *tcam_index) { - struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); uint16_t block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt; struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; struct hinic3_tcam_dynamic_block *tmp = NULL; @@ -616,6 +834,8 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, uint16_t index; int err; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + rule_nums += nic_dev->ethertype_rule_nums; /* * Check whether the number of filtering rules reaches the maximum * capacity of dynamic TCAM blocks. @@ -662,8 +882,7 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, if (tmp == NULL || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "Fdir filter dynamic lookup for index failed!"); + PMD_DRV_LOG(ERR, "Fdir filter dynamic lookup for index failed!"); goto look_up_failed; } @@ -674,20 +893,13 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, /* Find the first free position. */ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "tcam block 0x%x supports filter rules is full!", + PMD_DRV_LOG(ERR, "tcam block 0x%x supports filter rules is full!", tmp->dynamic_block_id); goto look_up_failed; } - tcam_filter->dynamic_block_id = tmp->dynamic_block_id; - tcam_filter->index = index; *tcam_index = index; - fdir_tcam_rule->index = - HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) + - index; - return tmp; look_up_failed: @@ -702,6 +914,107 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, return NULL; } +static void +hinic3_tcam_index_free(struct hinic3_nic_dev *nic_dev, uint16_t index, uint16_t block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + + TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list, entries) { + if (tmp->dynamic_block_id == block_id) + break; + } + + if (tmp == NULL || tmp->dynamic_block_id != block_id) { + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); + return; + } + + tmp->dynamic_index[index] = 0; + tmp->dynamic_index_cnt--; + if (tmp->dynamic_index_cnt == 0) { + hinic3_free_tcam_block(nic_dev->hwdev, &block_id); + hinic3_free_dynamic_block_resource(tcam_info, tmp); + } +} + +static uint16_t +hinic3_tcam_alloc_index(void *dev, uint16_t *block_id) +{ + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev; + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + uint16_t index = 0; + + tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev, tcam_info, &index); + if (tmp == NULL) { + PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); + return HINIC3_TCAM_INVALID_INDEX; + } + + tmp->dynamic_index[index] = 1; + tmp->dynamic_index_cnt++; + + *block_id = tmp->dynamic_block_id; + + return index; +} + +static int +hinic3_set_fdir_ethertype_filter(void *hwdev, uint8_t pkt_type, void *filter, uint8_t en) +{ + struct hinic3_set_fdir_ethertype_rule ethertype_cmd; + struct hinic3_ethertype_filter *ethertype_filter = (struct hinic3_ethertype_filter *)filter; + uint16_t out_size = sizeof(ethertype_cmd); + uint16_t 
block_id; + uint32_t index = 0; + int err; + + if (!hwdev) + return -EINVAL; + struct hinic3_nic_dev *nic_dev = + (struct hinic3_nic_dev *)((struct hinic3_hwdev *)hwdev)->dev_handle; + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en != 0) { + index = hinic3_tcam_alloc_index(nic_dev, &block_id); + if (index == HINIC3_TCAM_INVALID_INDEX) + return -ENOMEM; + index += HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id); + } else { + index = ethertype_filter->tcam_index[pkt_type]; + } + } + + memset(ðertype_cmd, 0, sizeof(struct hinic3_set_fdir_ethertype_rule)); + ethertype_cmd.func_id = hinic3_global_func_id(hwdev); + ethertype_cmd.pkt_type = pkt_type; + ethertype_cmd.pkt_type_en = en; + ethertype_cmd.index = index; + ethertype_cmd.qid = (uint8_t)ethertype_filter->queue; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, + HINIC3_NIC_CMD_SET_FDIR_STATUS, + ðertype_cmd, sizeof(ethertype_cmd), + ðertype_cmd, &out_size); + if (err || ethertype_cmd.head.status || !out_size) { + PMD_DRV_LOG(ERR, + "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", + err, ethertype_cmd.head.status, out_size, ethertype_cmd.func_id); + return -EIO; + } + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) { + if (en == 0) { + hinic3_tcam_index_free(nic_dev, HINIC3_TCAM_GET_INDEX_IN_BLOCK(index), + HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index)); + } else { + ethertype_filter->tcam_index[pkt_type] = index; + } + } + + return 0; +} + /** * Add a TCAM filter. * @@ -722,11 +1035,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); - struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; - struct hinic3_tcam_dynamic_block *tmp = NULL; struct hinic3_tcam_filter *tcam_filter; - uint16_t tcam_block_index = 0; - uint16_t index = 0; int err; /* Alloc TCAM filter memory. */ @@ -737,39 +1046,14 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, tcam_filter->tcam_key = *tcam_key; tcam_filter->queue = (uint16_t)(fdir_tcam_rule->data.qid); - - /* Add new TCAM rules. */ - if (nic_dev->tcam_rule_nums == 0) { - err = hinic3_alloc_tcam_block(nic_dev->hwdev, &tcam_block_index); - if (err) { - PMD_DRV_LOG(ERR, - "Fdir filter tcam alloc block failed!"); - goto failed; - } - - dynamic_block_ptr = - hinic3_alloc_dynamic_block_resource(tcam_info, - tcam_block_index); - if (dynamic_block_ptr == NULL) { - PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first block memory failed!"); - goto alloc_block_failed; - } - } - - /* - * Look for an available index in the dynamic block to store the new - * TCAM filter. - */ - tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info, - tcam_filter, &index); - if (tmp == NULL) { - PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); - goto lookup_tcam_index_failed; - } + tcam_filter->index = hinic3_tcam_alloc_index(nic_dev, &tcam_filter->dynamic_block_id); + if (tcam_filter->index == HINIC3_TCAM_INVALID_INDEX) + goto failed; + fdir_tcam_rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tcam_filter->dynamic_block_id) + + tcam_filter->index; /* Add a new TCAM rule to the network device. 
*/ - err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, - TCAM_RULE_FDIR_TYPE); + err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, TCAM_RULE_FDIR_TYPE); if (err) { PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!"); goto add_tcam_rules_failed; @@ -785,10 +1069,6 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, /* Add a filter to the end of the queue. */ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries); - /* Update dynamic index. */ - tmp->dynamic_index[index] = 1; - tmp->dynamic_index_cnt++; - nic_dev->tcam_rule_nums++; PMD_DRV_LOG(INFO, @@ -796,7 +1076,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, hinic3_global_func_id(nic_dev->hwdev)); PMD_DRV_LOG(INFO, "tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d", - tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index, + tcam_filter->dynamic_block_id, tcam_filter->index, fdir_tcam_rule->index, fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums); return 0; @@ -806,14 +1086,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, TCAM_RULE_FDIR_TYPE); add_tcam_rules_failed: -lookup_tcam_index_failed: - if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL) - hinic3_free_dynamic_block_resource(tcam_info, - dynamic_block_ptr); - -alloc_block_failed: - if (nic_dev->tcam_rule_nums == 0) - hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tcam_filter->dynamic_block_id); failed: rte_free(tcam_filter); @@ -850,8 +1123,7 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, } if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) { - PMD_DRV_LOG(ERR, - "Fdir filter del dynamic lookup for block failed!"); + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); return -EINVAL; } /* Calculate TCAM index. */ @@ -873,14 +1145,9 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, dynamic_block_id, tcam_filter->index, index, tmp->dynamic_index_cnt - 1, nic_dev->tcam_rule_nums - 1); - tmp->dynamic_index[tcam_filter->index] = 0; - tmp->dynamic_index_cnt--; - nic_dev->tcam_rule_nums--; - if (tmp->dynamic_index_cnt == 0) { - hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tmp->dynamic_block_id); - hinic3_free_dynamic_block_resource(tcam_info, tmp); - } + nic_dev->tcam_rule_nums--; /* If the number of rules is 0, the TCAM filter is disabled. */ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) @@ -930,6 +1197,7 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_tcam_filter *tcam_filter; @@ -940,11 +1208,15 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule)); memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key)); - hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, - &fdir_tcam_rule); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + else + hinic3_fdir_tcam_info_htn_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + /* Search for a filter. 
*/ tcam_filter = - hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key); + hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_ADD, HINIC3_INVALID_INDEX); if (tcam_filter != NULL && add) { PMD_DRV_LOG(ERR, "Filter exists."); return -EEXIST; @@ -965,6 +1237,13 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, fdir_filter->tcam_index = (int)(fdir_tcam_rule.index); } else { + tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_NOT_ADD, + fdir_filter->tcam_index); + if (tcam_filter == NULL) { + PMD_DRV_LOG(ERR, "Filter doesn't exist."); + return -ENOENT; + } PMD_DRV_LOG(INFO, "begin to del tcam filter"); ret = hinic3_del_tcam_filter(dev, tcam_filter); if (ret) @@ -1088,7 +1367,7 @@ hinic3_free_fdir_filter(struct rte_eth_dev *dev) static int hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1097,7 +1376,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1107,7 +1386,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Request Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1117,7 +1396,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Response Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1129,19 +1408,19 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, set_arp_rep_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, !add); + ethertype_filter, !add); set_arp_req_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1150,7 +1429,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the LACP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1160,7 +1439,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the OAM Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_OAM, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d", add ? 
"Add" : "Del", ret); @@ -1172,14 +1451,14 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, set_arp_oam_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1188,7 +1467,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the LLDP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1198,7 +1477,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the CDCP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_CDCP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1210,14 +1489,14 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, set_arp_cdcp_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1245,7 +1524,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, return hinic3_flow_set_arp_filter(dev, ethertype_filter, add); case RTE_ETHER_TYPE_RARP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_RARP, ethertype_filter, add); case RTE_ETHER_TYPE_SLOW: return hinic3_flow_set_slow_filter(dev, ethertype_filter, add); @@ -1255,11 +1534,11 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, case RTE_ETHER_TYPE_CNM: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_CNM, ethertype_filter, add); case RTE_ETHER_TYPE_ECP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_ECP, ethertype_filter, add); default: PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d", @@ -1270,7 +1549,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, } static int -hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter) +hinic3_flow_ethertype_rule_nums(struct hinic3_ethertype_filter *ethertype_filter) { switch (ethertype_filter->ether_type) { case RTE_ETHER_TYPE_ARP: @@ -1309,7 +1588,7 @@ hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filte */ int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { /* Get dev private info. 
*/ diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h index 8659f588d9..277d89d4fd 100644 --- a/drivers/net/hinic3/hinic3_fdir.h +++ b/drivers/net/hinic3/hinic3_fdir.h @@ -14,6 +14,30 @@ #define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index)) +#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \ + ((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \ + ((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_INVALID_INDEX 0xFFFF + +enum hinic3_ether_type { + HINIC3_PKT_TYPE_ARP = 1, + HINIC3_PKT_TYPE_ARP_REQ, + HINIC3_PKT_TYPE_ARP_REP, + HINIC3_PKT_TYPE_RARP, + HINIC3_PKT_TYPE_LACP, + HINIC3_PKT_TYPE_LLDP, + HINIC3_PKT_TYPE_OAM, + HINIC3_PKT_TYPE_CDCP, + HINIC3_PKT_TYPE_CNM, + HINIC3_PKT_TYPE_ECP = 10, + HINIC3_PKT_TYPE_BUTT, + + HINIC3_PKT_UNKNOWN = 31, +}; + /* Indicate a traffic filtering rule. */ struct rte_flow { TAILQ_ENTRY(rte_flow) node; @@ -30,6 +54,8 @@ struct hinic3_fdir_rule_key { uint16_t src_port; uint16_t dst_port; uint8_t proto; + uint8_t vlan_flag; + uint16_t ether_type; }; struct hinic3_fdir_filter { @@ -42,17 +68,34 @@ struct hinic3_fdir_filter { uint32_t rq_index; /**< Queue assigned when matched. */ }; +struct hinic3_ethertype_filter { + int tcam_index[HINIC3_PKT_TYPE_BUTT]; + uint16_t ether_type; /**< Ether type to match */ + uint16_t queue; /**< Queue assigned to when match*/ +}; + /* This structure is used to describe a basic filter type. */ struct hinic3_filter_t { uint16_t filter_rule_nums; enum rte_filter_type filter_type; - struct rte_eth_ethertype_filter ethertype_filter; + struct hinic3_ethertype_filter ethertype_filter; struct hinic3_fdir_filter fdir_filter; }; +enum hinic3_action_type { + HINIC3_ACTION_ADD, + HINIC3_ACTION_NOT_ADD, +}; + enum hinic3_fdir_tunnel_mode { HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0, - HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_NVGRE = 2, + HINIC3_FDIR_TUNNEL_MODE_FC = 3, + HINIC3_FDIR_TUNNEL_MODE_GPE = 4, + HINIC3_FDIR_TUNNEL_MODE_GENEVE = 5, + HINIC3_FDIR_TUNNEL_MODE_NSH = 6, + HINIC3_FDIR_TUNNEL_MODE_IPIP = 7, }; enum hinic3_fdir_ip_type { @@ -61,7 +104,6 @@ enum hinic3_fdir_ip_type { HINIC3_FDIR_IP_TYPE_ANY = 2, }; -/* Describe the key structure of the TCAM. */ struct hinic3_tcam_key_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -77,11 +119,13 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t dipv4_l : 16; - uint32_t rsvd2 : 16; + + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t dport : 16; uint32_t sport : 16; @@ -89,9 +133,10 @@ struct hinic3_tcam_key_mem { uint32_t rsvd6 : 16; uint32_t outer_sipv4_h : 16; - uint32_t outer_sipv4_l : 16; + uint32_t outer_sipv4_l : 16; uint32_t outer_dipv4_h : 16; + uint32_t outer_dipv4_l : 16; uint32_t vni_h : 16; @@ -110,13 +155,14 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t sipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t dipv4_l : 16; uint32_t rsvd3; uint32_t dport : 16; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t rsvd5 : 16; uint32_t sport : 16; @@ -135,18 +181,90 @@ struct hinic3_tcam_key_mem { #endif }; -/* - * Define the IPv6-related TCAM key data structure in common - * scenarios or IPv6 tunnel scenarios. 
- */ +struct hinic3_tcam_key_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h: 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t sipv4_h : 16; + + uint32_t sipv4_l : 16; + uint32_t rsvd5 : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t rsvd8 : 16; + uint32_t dipv4_h : 16; + + uint32_t dipv4_l : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd5 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t sipv4_h : 16; + uint32_t vni_l : 16; + + uint32_t rsvd5 : 16; + uint32_t sipv4_l : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t dipv4_h : 16; + uint32_t rsvd8 : 16; + + uint32_t sport : 16; + uint32_t dipv4_l :16; + + uint32_t rsvd9 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; - /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */ uint32_t ip_proto : 8; uint32_t tunnel_type : 4; uint32_t outer_ip_type : 1; - uint32_t rsvd1 : 3; + uint32_t vlan_flag : 1; + uint32_t rsvd1 : 2; uint32_t function_id : 15; uint32_t ip_type : 1; @@ -179,7 +297,9 @@ struct hinic3_tcam_key_ipv6_mem { uint32_t dipv6_key7 : 16; uint32_t rsvd2 : 16; #else - uint32_t rsvd1 : 3; + uint32_t rsvd1 : 2; + uint32_t vlan_flag : 1; + uint32_t outer_ip_type : 1; uint32_t tunnel_type : 4; uint32_t ip_proto : 8; @@ -218,10 +338,86 @@ struct hinic3_tcam_key_ipv6_mem { #endif }; -/* - * Define the tcam key value data structure related to IPv6 in - * the VXLAN scenario. 
- */ +struct hinic3_tcam_key_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t sipv6_key0 : 16; + + uint32_t sipv6_key1 : 16; + uint32_t sipv6_key2 : 16; + + uint32_t sipv6_key3 : 16; + uint32_t sipv6_key4 : 16; + + uint32_t sipv6_key5 : 16; + uint32_t sipv6_key6 : 16; + + uint32_t sipv6_key7 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t sipv6_key0 : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t sipv6_key2 : 16; + uint32_t sipv6_key1 : 16; + + uint32_t sipv6_key4 : 16; + uint32_t sipv6_key3 : 16; + + uint32_t sipv6_key6 : 16; + uint32_t sipv6_key5 : 16; + + uint32_t dipv6_key0 : 16; + uint32_t sipv6_key7 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd2 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_vxlan_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -246,7 +442,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t sport : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3 : 16; uint32_t outer_sipv4_h : 16; @@ -281,7 +478,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t dipv6_key7 : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t sport : 16; uint32_t outer_sipv4_h : 16; @@ -298,6 +496,88 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { #endif }; +struct hinic3_tcam_key_vxlan_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t rsvd3 : 16; + + uint32_t rsvd4 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t rsvd3 : 16; + uint32_t vni_l : 16; + + uint32_t dipv6_key0 : 16; + 
uint32_t rsvd4 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd5 : 16; + uint32_t dport : 16; +#endif +}; + /* * TCAM key structure. The two unions indicate the key and mask respectively. * The TCAM key is consistent with the TCAM entry. @@ -307,18 +587,26 @@ struct hinic3_tcam_key { struct hinic3_tcam_key_mem key_info; struct hinic3_tcam_key_ipv6_mem key_info_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_info_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_info_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_info_vxlan_ipv6_htn; }; union { struct hinic3_tcam_key_mem key_mask; struct hinic3_tcam_key_ipv6_mem key_mask_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_mask_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_mask_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_mask_vxlan_ipv6_htn; }; }; /* Structure indicates the TCAM filter. */ struct hinic3_tcam_filter { - TAILQ_ENTRY(hinic3_tcam_filter) - entries; /**< Filter entry, used for linked list operations. */ + /** Filter entry, used for linked list operations. */ + TAILQ_ENTRY(hinic3_tcam_filter) entries; uint16_t dynamic_block_id; /**< Dynamic block ID. */ uint16_t index; /**< TCAM index. */ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */ @@ -362,33 +650,24 @@ struct hinic3_tcam_info { #define HINIC3_CNM_RULE_NUM 1 #define HINIC3_ECP_RULE_NUM 2 +#define HINIC3_UINT1_MAX 0x1 +#define HINIC3_UINT2_MAX 0x3 +#define HINIC3_UINT3_MAX 0x7 +#define HINIC3_UINT4_MAX 0xf +#define HINIC3_UINT5_WIDTH 0x5 +#define HINIC3_UINT5_MAX 0x1f +#define HINIC3_UINT15_MAX 0x7fff + /* Define Ethernet type. */ #define RTE_ETHER_TYPE_CNM 0x22e7 #define RTE_ETHER_TYPE_ECP 0x8940 -/* Protocol type of the data packet. */ -enum hinic3_ether_type { - HINIC3_PKT_TYPE_ARP = 1, - HINIC3_PKT_TYPE_ARP_REQ, - HINIC3_PKT_TYPE_ARP_REP, - HINIC3_PKT_TYPE_RARP, - HINIC3_PKT_TYPE_LACP, - HINIC3_PKT_TYPE_LLDP, - HINIC3_PKT_TYPE_OAM, - HINIC3_PKT_TYPE_CDCP, - HINIC3_PKT_TYPE_CNM, - HINIC3_PKT_TYPE_ECP = 10, - - HINIC3_PKT_UNKNOWN = 31, -}; - int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add); int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add); - void hinic3_free_fdir_filter(struct rte_eth_dev *dev); int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, uint32_t queue_id, uint32_t able); diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index d0acba4cf4..e1741d1156 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -277,22 +277,6 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); -/** - * Get cmdq ops software tile NIC(stn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); - -/** - * Get cmdq ops hardware tile NIC(htn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); - /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 363f3f56c8..1a9a88204f 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -22,8 +22,7 @@ * Current pi. */ static inline void -hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, - uint16_t *pi) +hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, uint16_t *pi) { *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx); @@ -84,8 +83,7 @@ hinic3_rx_fill_wqe(struct hinic3_rxq *rxq) if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { /* Unit of cqe length is 16B. */ - hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, - cqe_dma, + hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, cqe_dma, HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT); /* Use fixed len. */ rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; @@ -436,12 +434,18 @@ hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev, rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); return err; } @@ -488,8 +492,7 @@ hinic3_update_rss_config(struct rte_eth_dev *dev, goto init_rss_fail; } - err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, - prio_tc); + err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, prio_tc); if (err) { PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err); goto init_rss_fail; @@ -796,7 +799,7 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) } } - hinic3_rearm_rxq_mbuf(rxq); + (void)hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); if (err) @@ -812,7 +815,6 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return err; } - static inline uint64_t hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { @@ -1016,8 +1018,8 @@ hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hini uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); - cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); - cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index fca94dd08e..1a864d0775 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -393,7 +393,7 @@ static int hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_wqe_info *wqe_info) + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = 
mbuf->ol_flags; struct hinic3_offload_info *offload_info = &wqe_info->offload_info; @@ -409,7 +409,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + wqe_info->queue_info.payload_offset = wqe_info->payload_offset >> 1; if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; @@ -457,7 +457,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, offload_info->out_l4_en = 1; set_tx_wqe_offload: - nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); + nic_dev->tx_ops->nic_tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -627,9 +627,8 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, return err; /* Non-tso mbuf only check sge num. */ - if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { + if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); - } /* Tso mbuf. */ wqe_info->payload_offset = @@ -647,8 +646,7 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, } static inline void -hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, - uint32_t len) +hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, uint32_t len) { buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); @@ -832,14 +830,14 @@ hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT); *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | - SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset, PLDOFF) | SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index ffafe39fb5..73f4922734 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -52,4 +52,12 @@ struct hinic3_htn_vlan_ctx { uint16_t dest_func_id; }; +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @return + * Pointer to ops. 
+ */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); + #endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build index b79b753716..b286cdb79c 100644 --- a/drivers/net/hinic3/meson.build +++ b/drivers/net/hinic3/meson.build @@ -16,8 +16,6 @@ endif cflags += ['-DHW_CONVERT_ENDIAN'] -subdir('base') - sources = files( 'hinic3_ethdev.c', 'hinic3_nic_io.c', @@ -28,3 +26,9 @@ sources = files( ) includes += include_directories('base') +includes += include_directories('stn_adapt') +includes += include_directories('stn_adapt') + +subdir('base') +subdir('htn_adapt') +subdir('stn_adapt') \ No newline at end of file diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index dfe8598f78..f41f060d17 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -94,7 +94,7 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } -static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev __rte_unused, const uint32_t *indir_table, struct hinic3_cmd_buf *cmd_buf) { diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index a40c4faa89..f1720c29c7 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -35,4 +35,12 @@ struct hinic3_stn_vlan_ctx { uint32_t vlan_sel; }; +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); + #endif /* _HINIC3_STN_CMDQ_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
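The ops getters declared above give each NIC generation its own command-queue backend: hinic3_nic_cmdq_get_stn_ops() for the software-tile path and hinic3_nic_cmdq_get_htn_ops() for the hardware-tile path. A minimal sketch of how a driver init path could bind the right table is shown below; only the two getters and the NIC_F_HTN_CMDQ feature bit (checked elsewhere in this series, e.g. in hinic3_init_rss_type()) come from the patches, while the helper name and the cmdq_ops member are illustrative assumptions.

#include "hinic3_stn_cmdq.h"   /* from stn_adapt/, added to the include path above */
#include "hinic3_htn_cmdq.h"   /* from htn_adapt/ */

/* Sketch only: bind the per-generation cmdq ops once, at device init. */
static void
hinic3_nic_cmdq_bind_ops(struct hinic3_nic_dev *nic_dev)
{
	if (nic_dev->feature_cap & NIC_F_HTN_CMDQ)
		nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); /* new SPx NIC */
	else
		nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); /* previous SPx NIC */
}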
* [v5 0/7] hinic3 change for support new SPx NIC 2026-01-31 10:05 ` [PATCH 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang ` (2 preceding siblings ...) 2026-03-18 6:20 ` [v4 0/7] hinic3 change for support new SPx NIC Feifei Wang @ 2026-03-18 12:31 ` Feifei Wang 2026-03-18 12:31 ` [V5 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang ` (6 more replies) 3 siblings, 7 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 12:31 UTC (permalink / raw) To: dev; +Cc: chenyi221 From: chenyi221 <chenyi221@huawei.com> Change hinic3 driver to support Huawei new SPx series NIC. v2: --fix build issues v3: --fix community review comments and err reports v4: --fix rss type assignment error v5 --fix community ubuntu-22.04-clang err Feifei Wang (7): net/hinic3: add support for new SPx series NIC net/hinic3: add enhance cmdq support for new SPx series NIC net/hinic3: use different callback func to split new/old cmdq operations net/hinic3: add fun init ops to support Compact CQE net/hinic3: add rx ops to support Compact CQE net/hinic3: add tx ops to support Compact CQE net/hinic3: use different callback func to support htn fdir drivers/net/hinic3/base/hinic3_cmd.h | 80 ++- drivers/net/hinic3/base/hinic3_cmdq.c | 370 ++++------ drivers/net/hinic3/base/hinic3_cmdq.h | 112 ++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 +++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++ drivers/net/hinic3/base/hinic3_csr.h | 18 +- drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_hwif.c | 10 +- drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 182 ++--- drivers/net/hinic3/base/hinic3_nic_cfg.h | 98 ++- drivers/net/hinic3/base/meson.build | 1 + drivers/net/hinic3/hinic3_ethdev.c | 279 ++++++-- drivers/net/hinic3/hinic3_ethdev.h | 120 ++-- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++-------- drivers/net/hinic3/hinic3_nic_io.h | 163 ++++- drivers/net/hinic3/hinic3_rx.c | 266 +++++-- drivers/net/hinic3/hinic3_rx.h | 182 ++++- drivers/net/hinic3/hinic3_tx.c | 458 ++++++------ drivers/net/hinic3/hinic3_tx.h | 154 +++- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 167 +++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 63 ++ drivers/net/hinic3/htn_adapt/meson.build | 7 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 151 ++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 46 ++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 33 files changed, 3362 insertions(+), 1443 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build -- 2.45.1.windows.1 ^ permalink raw reply [flat|nested] 80+ messages in thread
* [V5 1/7] net/hinic3: add support for new SPx series NIC 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang @ 2026-03-18 12:31 ` Feifei Wang 2026-03-18 12:31 ` [V5 2/7] net/hinic3: add enhance cmdq " Feifei Wang ` (5 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 12:31 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add new device id to suuport Huawei new SPx series Network Adapters. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_csr.h | 18 +++++++++--------- drivers/net/hinic3/base/hinic3_hwif.c | 10 +++++++--- drivers/net/hinic3/hinic3_ethdev.c | 14 +++++++------- 3 files changed, 23 insertions(+), 19 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h index 94b10601c4..eceb34e9fd 100644 --- a/drivers/net/hinic3/base/hinic3_csr.h +++ b/drivers/net/hinic3/base/hinic3_csr.h @@ -5,15 +5,15 @@ #ifndef _HINIC3_CSR_H_ #define _HINIC3_CSR_H_ -#ifdef CONFIG_SP_VID_DID -#define PCI_VENDOR_ID_SPNIC 0x1F3F -#define HINIC3_DEV_ID_STANDARD 0x9020 -#define HINIC3_DEV_ID_VF 0x9001 -#else -#define PCI_VENDOR_ID_HUAWEI 0x19e5 -#define HINIC3_DEV_ID_STANDARD 0x0222 -#define HINIC3_DEV_ID_VF 0x375F -#endif +#define PCI_VENDOR_ID_HUAWEI 0x19e5 + +#define HINIC3_DEV_ID_SP620 0x0222 +#define HINIC3_DEV_ID_VF_SP620 0x375F + +#define HINIC3_DEV_ID_SP230 0x0229 +#define HINIC3_DEV_ID_VF_SP230 0x3750 + +#define HINIC3_DEV_ID_SP920 0x0224 /* * Bit30/bit31 for bar index flag. diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c index 080254bf44..c82b223fa0 100644 --- a/drivers/net/hinic3/base/hinic3_hwif.c +++ b/drivers/net/hinic3/base/hinic3_hwif.c @@ -138,7 +138,11 @@ #define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MASK)) -#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF) +static inline bool hinic3_is_vf_dev(const struct rte_pci_device *pdev) +{ + return pdev->id.device_id == HINIC3_DEV_ID_VF_SP620 || + pdev->id.device_id == HINIC3_DEV_ID_VF_SP230; +} uint32_t hinic3_hwif_read_reg(struct hinic3_hwif *hwif, uint32_t reg) @@ -552,7 +556,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) void *db_base = NULL; int cfg_bar; - cfg_bar = HINIC3_IS_VF_DEV(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR + cfg_bar = hinic3_is_vf_dev(pci_dev) ? 
HINIC3_VF_PCI_CFG_REG_BAR : HINIC3_PF_PCI_CFG_REG_BAR; cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr; @@ -561,7 +565,7 @@ hinic3_get_bar_addr(struct hinic3_hwdev *hwdev) "mem_resource addr is null, cfg_regs_base is NULL"); return -EFAULT; } - if (!HINIC3_IS_VF_DEV(pci_dev)) { + if (!hinic3_is_vf_dev(pci_dev)) { mgmt_reg_base = pci_dev->mem_resource[HINIC3_PCI_MGMT_REG_BAR].addr; if (mgmt_reg_base == NULL) { diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 0f72728a95..da2d6722d2 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -3521,13 +3521,13 @@ hinic3_dev_uninit(struct rte_eth_dev *dev) } static const struct rte_pci_id pci_id_hinic3_map[] = { -#ifdef CONFIG_SP_VID_DID - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)}, -#else - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)}, - {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)}, -#endif + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP620)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP620)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP230)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF_SP230)}, + + {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_SP920)}, {.vendor_id = 0}, }; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
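Patch 1/7 only introduces the new vendor/device ID pairs; everything else keys off them. The snippet below is an illustrative helper, not part of the patch, showing how the HINIC3_DEV_ID_* macros from hinic3_csr.h distinguish the three adapter families; note that, per the patch, only the SP620 and SP230 VF IDs exist, which is why hinic3_is_vf_dev() checks exactly those two.

#include "hinic3_csr.h"

/* Hypothetical helper, for logging or debug only: map a PCI device ID
 * from the new ID table to a readable adapter name.
 */
static const char *
hinic3_dev_id_to_name(uint16_t device_id)
{
	switch (device_id) {
	case HINIC3_DEV_ID_SP620:
	case HINIC3_DEV_ID_VF_SP620:
		return "SP620";
	case HINIC3_DEV_ID_SP230:
	case HINIC3_DEV_ID_VF_SP230:
		return "SP230";
	case HINIC3_DEV_ID_SP920:
		return "SP920";
	default:
		return "unknown hinic3 device";
	}
}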
* [V5 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-03-18 12:31 ` [V5 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang @ 2026-03-18 12:31 ` Feifei Wang 2026-03-18 12:31 ` [V5 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang ` (4 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 12:31 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add enhance command queue for new SPx series NIC New SPx series NIC uses enhance command queue to send messages to hardware NIC, which is different from previous SPx NIC's common command queue.HINIC3_CMDQ_BUF_SIZE changed from 2048 to 1024 to adapt to the two types of NICs. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_cmd.h | 80 ++-- drivers/net/hinic3/base/hinic3_cmdq.c | 370 +++++++----------- drivers/net/hinic3/base/hinic3_cmdq.h | 112 +++++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 111 ++++++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 125 ++++++ drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 77 ++-- drivers/net/hinic3/base/meson.build | 1 + 13 files changed, 627 insertions(+), 333 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h index 6042ca51bd..f2d5d47522 100644 --- a/drivers/net/hinic3/base/hinic3_cmd.h +++ b/drivers/net/hinic3/base/hinic3_cmd.h @@ -23,14 +23,21 @@ #define HINIC3_RSS_TYPE_GET(val, member) \ (((uint32_t)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1) +#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) + /* NIC CMDQ MODE. */ enum hinic3_ucode_cmd { - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT = 1, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE = 4, - HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE = 5, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE = 6, - HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, + HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, + HINIC3_UCODE_CMD_ARM_SQ, + HINIC3_UCODE_CMD_ARM_RQ, + HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, + HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE, + HINIC3_UCODE_CMD_SET_IQ_ENABLE, + HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_MODIFY_VLAN_CTX, }; /* Commands between NIC to MPU. 
*/ @@ -51,6 +58,12 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_RX_LRO = 13, HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE = 16, + + HINIC3_NIC_CMD_CACHE_OUT_QP_RES = 17, + HINIC3_NIC_CMD_SET_RQ_CI_CTX = 18, + HINIC3_NIC_CMD_SET_RQ_ENABLE = 19, + /* MAC & VLAN CFG */ HINIC3_NIC_CMD_GET_MAC = 20, HINIC3_NIC_CMD_SET_MAC = 21, @@ -59,6 +72,10 @@ enum hinic3_nic_cmd { HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + + HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN = 34, + HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN = 35, + /* RSS CFG */ HINIC3_NIC_CMD_RSS_CFG = 60, HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, @@ -89,6 +106,7 @@ enum hinic3_mgmt_cmd { HINIC3_MGMT_CMD_CFG_PAGESIZE = 22, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG = 23, HINIC3_MGMT_CMD_SET_DMA_ATTR = 25, + HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT = 26, HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40, HINIC3_MGMT_CMD_GET_FW_VERSION = 60, HINIC3_MGMT_CMD_GET_BOARD_INFO = 61, @@ -97,39 +115,39 @@ enum hinic3_mgmt_cmd { }; enum mag_cmd { - SERDES_CMD_PROCESS = 0, + SERDES_CMD_PROCESS = 0, - MAG_CMD_SET_PORT_CFG = 1, - MAG_CMD_SET_PORT_ADAPT = 2, - MAG_CMD_CFG_LOOPBACK_MODE = 3, + MAG_CMD_SET_PORT_CFG = 1, + MAG_CMD_SET_PORT_ADAPT = 2, + MAG_CMD_CFG_LOOPBACK_MODE = 3, - MAG_CMD_GET_PORT_ENABLE = 5, - MAG_CMD_SET_PORT_ENABLE = 6, - MAG_CMD_GET_LINK_STATUS = 7, - MAG_CMD_SET_LINK_FOLLOW = 8, - MAG_CMD_SET_PMA_ENABLE = 9, - MAG_CMD_CFG_FEC_MODE = 10, + MAG_CMD_GET_PORT_ENABLE = 5, + MAG_CMD_SET_PORT_ENABLE = 6, + MAG_CMD_GET_LINK_STATUS = 7, + MAG_CMD_SET_LINK_FOLLOW = 8, + MAG_CMD_SET_PMA_ENABLE = 9, + MAG_CMD_CFG_FEC_MODE = 10, /* PHY */ - MAG_CMD_GET_XSFP_INFO = 60, - MAG_CMD_SET_XSFP_ENABLE = 61, - MAG_CMD_GET_XSFP_PRESENT = 62, + MAG_CMD_GET_XSFP_INFO = 60, + MAG_CMD_SET_XSFP_ENABLE = 61, + MAG_CMD_GET_XSFP_PRESENT = 62, /* sfp/qsfp single byte read/write, for equipment test. 
*/ - MAG_CMD_SET_XSFP_RW = 63, - MAG_CMD_CFG_XSFP_TEMPERATURE = 64, + MAG_CMD_SET_XSFP_RW = 63, + MAG_CMD_CFG_XSFP_TEMPERATURE = 64, - MAG_CMD_WIRE_EVENT = 100, - MAG_CMD_LINK_ERR_EVENT = 101, + MAG_CMD_WIRE_EVENT = 100, + MAG_CMD_LINK_ERR_EVENT = 101, - MAG_CMD_EVENT_PORT_INFO = 150, - MAG_CMD_GET_PORT_STAT = 151, - MAG_CMD_CLR_PORT_STAT = 152, - MAG_CMD_GET_PORT_INFO = 153, - MAG_CMD_GET_PCS_ERR_CNT = 154, - MAG_CMD_GET_MAG_CNT = 155, - MAG_CMD_DUMP_ANTRAIN_INFO = 156, + MAG_CMD_EVENT_PORT_INFO = 150, + MAG_CMD_GET_PORT_STAT = 151, + MAG_CMD_CLR_PORT_STAT = 152, + MAG_CMD_GET_PORT_INFO = 153, + MAG_CMD_GET_PCS_ERR_CNT = 154, + MAG_CMD_GET_MAG_CNT = 155, + MAG_CMD_DUMP_ANTRAIN_INFO = 156, - MAG_CMD_MAX = 0xFF + MAG_CMD_MAX = 0xFF }; #endif /* _HINIC3_CMD_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c index e2b30ff94e..9c27c6f54c 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.c +++ b/drivers/net/hinic3/base/hinic3_cmdq.c @@ -5,6 +5,7 @@ #include "hinic3_compat.h" #include "hinic3_cmd.h" #include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" #include "hinic3_hwdev.h" #include "hinic3_hwif.h" #include "hinic3_mgmt.h" @@ -125,17 +126,17 @@ #define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi)) -#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) - #define FIRST_DATA_TO_WRITE_LAST sizeof(uint64_t) -#define WQE_LCMD_SIZE 64 -#define WQE_SCMD_SIZE 64 +#define WQE_LCMDQ_SIZE 64 +#define WQE_SCMDQ_SIZE 64 +#define WQE_ENHANCE_CMDQ_SIZE 32 #define COMPLETE_LEN 3 #define CMDQ_WQEBB_SIZE 64 #define CMDQ_WQEBB_SHIFT 6 +#define CMDQ_ENHANCE_WQEBB_SHIFT 4 #define CMDQ_WQE_SIZE 64 @@ -203,43 +204,6 @@ hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf) rte_free(cmd_buf); } -static uint32_t -cmdq_wqe_size(enum cmdq_wqe_type wqe_type) -{ - uint32_t wqe_size = 0; - - switch (wqe_type) { - case WQE_LCMD_TYPE: - wqe_size = WQE_LCMD_SIZE; - break; - case WQE_SCMD_TYPE: - wqe_size = WQE_SCMD_SIZE; - break; - } - - return wqe_size; -} - -static uint32_t -cmdq_get_wqe_size(enum bufdesc_len len) -{ - uint32_t wqe_size = 0; - - switch (len) { - case BUFDESC_LCMD_LEN: - wqe_size = WQE_LCMD_SIZE; - break; - case BUFDESC_SCMD_LEN: - wqe_size = WQE_SCMD_SIZE; - break; - default: - PMD_DRV_LOG(ERR, "Invalid bufdesc_len"); - break; - } - - return wqe_size; -} - static void cmdq_set_completion(struct hinic3_cmdq_completion *complete, struct hinic3_cmd_buf *buf_out) @@ -274,11 +238,11 @@ cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type, } static void -cmdq_wqe_fill(void *dst, void *src) +cmdq_wqe_fill(void *dst, void *src, int wqe_size) { memcpy((void *)((uint8_t *)dst + FIRST_DATA_TO_WRITE_LAST), (void *)((uint8_t *)src + FIRST_DATA_TO_WRITE_LAST), - CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST); + wqe_size - FIRST_DATA_TO_WRITE_LAST); /* The first 8 bytes should be written last. */ rte_atomic_thread_fence(rte_memory_order_release); @@ -369,182 +333,94 @@ cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in); } -/** - * Prepare necessary context for command queue, send a synchronous command with - * a direct response to hardware. It waits for completion of command by polling - * command queue for a response. - * - * @param[in] cmdq - * The command queue object that represents the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. 
- * @param[in] buf_in - * The input buffer containing the command parameters. - * @param[out] out_param - * A pointer to the location where the response data will be stored, if - * available. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - uint64_t *out_param, uint32_t timeout) +static void +cmdq_sync_wqe_prepare(struct hinic3_cmdq *cmdq, uint8_t mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + struct hinic3_cmdq_wqe *curr_wqe, uint16_t curr_pi, + enum hinic3_cmdq_cmd_type nic_cmd_type) { struct hinic3_cmdq_wqe wqe; - struct hinic3_wq *wq = cmdq->wq; - struct hinic3_cmdq_wqe *curr_wqe = NULL; - struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + int wrapped, wqe_size; + enum cmdq_cmd_type cmd_type; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + wqe_size = cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ ? + WQE_LCMDQ_SIZE : WQE_ENHANCE_CMDQ_SIZE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ - rte_spinlock_lock(&cmdq->cmdq_lock); + memset(&wqe, 0, (uint32_t)wqe_size); - curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } - - memset(&wqe, 0, sizeof(wqe)); wrapped = cmdq->wrapped; - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped, - mod, cmd, curr_prod_idx); - + cmd_type = (nic_cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP) ? + SYNC_CMD_DIRECT_RESP : SYNC_CMD_SGE_RESP; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_set_lcmd_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd, curr_pi); + else + hinic3_enhance_cmdq_set_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd); + /* The data written to HW should be in Big Endian Format */ hinic3_cpu_to_hw(&wqe, wqe_size); - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP; - - cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ - - if (out_param) { - wqe_lcmd = &curr_wqe->wqe_lcmd; - *out_param = rte_cpu_to_be_64(wqe_lcmd->completion.direct_resp); - } - - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; - -cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); - - return err; + cmdq_wqe_fill(curr_wqe, &wqe, wqe_size); } -/** - * Send a synchronous command with detailed response and wait for the - * completion. - * - * @param[in] cmdq - * The command queue object representing the queue to send the command to. 
- * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the parameters for the command. - * @param[out] buf_out - * The output buffer where the detailed response from the hardware will be - * stored. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - struct hinic3_cmd_buf *buf_out, uint32_t timeout) +#define NUM_WQEBBS_FOR_CMDQ_WQE 1 +#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2 + +static int cmdq_sync_cmd(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, uint8_t cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + uint64_t *out_param, uint32_t timeout, + enum hinic3_cmdq_cmd_type nic_cmd_type) { - struct hinic3_cmdq_wqe wqe; struct hinic3_wq *wq = cmdq->wq; struct hinic3_cmdq_wqe *curr_wqe = NULL; uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; + uint32_t time; + uint64_t *direct_resp = NULL; + int err; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + num_wqebbs = (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) ? + NUM_WQEBBS_FOR_CMDQ_WQE : NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ + /* Keep wrapped and doorbell index correct */ rte_spinlock_lock(&cmdq->cmdq_lock); curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { + if (!curr_wqe) { err = -EBUSY; goto cmdq_unlock; } - memset(&wqe, 0, sizeof(wqe)); - wrapped = cmdq->wrapped; + cmdq_sync_wqe_prepare(cmdq, mod, cmd, buf_in, buf_out, + curr_wqe, curr_prod_idx, nic_cmd_type); + + cmdq->cmd_infos[curr_prod_idx].cmd_type = nic_cmd_type; next_prod_idx = curr_prod_idx + num_wqebbs; if (next_prod_idx >= wq->q_depth) { cmdq->wrapped = !cmdq->wrapped; next_prod_idx -= wq->q_depth; } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped, - mod, cmd, curr_prod_idx); - - hinic3_cpu_to_hw(&wqe, wqe_size); - - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP; - - cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); + cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); + time = msecs_to_cycles(timeout ? 
timeout : CMDQ_CMD_TIMEOUT); + err = hinic3_cmdq_poll_msg(cmdq, time); if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); + PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", curr_prod_idx); err = -ETIMEDOUT; goto cmdq_unlock; } - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ + rte_atomic_thread_fence(rte_memory_order_acquire); /* Read error code after completion */ + + if (out_param) { + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + direct_resp = + (uint64_t *)(&curr_wqe->wqe_lcmd.completion.direct_resp); + else + direct_resp = (uint64_t *) + (&curr_wqe->enhanced_cmdq_wqe.completion.sge_resp_lo_addr); + + *out_param = rte_cpu_to_be_64(*direct_resp); + } if (cmdq->errcode[curr_prod_idx]) err = cmdq->errcode[curr_prod_idx]; @@ -588,7 +464,8 @@ wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs) int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, - struct hinic3_cmd_buf *buf_in, uint64_t *out_param, uint32_t timeout) + struct hinic3_cmd_buf *buf_in, + uint64_t *out_param, uint32_t timeout) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; int err; @@ -605,8 +482,8 @@ hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, out_param, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, + NULL, out_param, timeout, HINIC3_CMD_TYPE_DIRECT_RESP); } int @@ -628,8 +505,8 @@ hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, buf_out, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, buf_out, + NULL, timeout, HINIC3_CMD_TYPE_SGE_RESP); } static void @@ -643,21 +520,23 @@ clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) { struct hinic3_ctrl *ctrl = NULL; uint32_t header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info); - int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN); - uint32_t wqe_size = cmdq_get_wqe_size(buf_len); uint16_t num_wqebbs; - - if (wqe_size == WQE_LCMD_SIZE) - ctrl = &wqe->wqe_lcmd.ctrl; - else - ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; - - /* Clear HW busy bit. */ - ctrl->ctrl_info = 0; + enum data_format df; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT); + if (df == DATA_SGE) + ctrl = &wqe->wqe_lcmd.ctrl; + else + ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; + ctrl->ctrl_info = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE; + } else { + wqe->enhanced_cmdq_wqe.completion.cs_format = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; + } rte_atomic_thread_fence(rte_memory_order_release); /**< Verify wqe is cleared. 
*/ - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq); hinic3_put_wqe(cmdq->wq, num_wqebbs); } @@ -735,25 +614,28 @@ static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; - struct hinic3_cmd_cmdq_ctxt cmdq_ctxt; - enum hinic3_cmdq_type cmdq_type; + struct hinic3_cmd_cmdq_ctxt cmdq_ctxt = {0}; + enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; uint16_t out_size = sizeof(cmdq_ctxt); + uint16_t cmd; int err; - for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { - memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt)); - cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + if (hwdev->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + cmd = HINIC3_MGMT_CMD_SET_CMDQ_CTXT; + } else { + cmdq_ctxt.enhance_ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_enhance_ctxt; + cmd = HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT; + } cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev); cmdq_ctxt.cmdq_id = cmdq_type; - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, - HINIC3_MGMT_CMD_SET_CMDQ_CTXT, + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, &cmdq_ctxt, sizeof(cmdq_ctxt), &cmdq_ctxt, &out_size); - if (err || !out_size || cmdq_ctxt.status) { - PMD_DRV_LOG(ERR, - "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", + PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", err, cmdq_ctxt.status, out_size); return -EFAULT; } @@ -794,6 +676,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) cmdqs->cmdqs_db_base = (uint8_t *)db_base; for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + cmdqs->cmdq[cmdq_type].cmdqs = cmdqs; err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, &cmdqs->saved_wqs[cmdq_type], cmdq_type); if (err) { @@ -801,8 +684,11 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) goto init_cmdq_err; } - cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], - &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + if (cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], + &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + else + hinic3_enhance_cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type]); } err = hinic3_set_cmdq_ctxts(hwdev); @@ -821,11 +707,12 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) } int -hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_init(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = NULL; size_t saved_wqs_size; char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE]; + uint32_t wqebb_shift; int err; cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0); @@ -835,6 +722,14 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) hwdev->cmdqs = cmdqs; cmdqs->hwdev = hwdev; + if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev)) + cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ; + else + cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ; + + wqebb_shift = (cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) ? 
+ CMDQ_ENHANCE_WQEBB_SHIFT : CMDQ_WQEBB_SHIFT; + saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq); cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0); if (!cmdqs->saved_wqs) { @@ -844,8 +739,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE); - snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", - hwdev->port_id); + snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u", hwdev->port_id); cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name, HINIC3_CMDQ_DEPTH * HINIC3_MAX_CMDQ_TYPES, 0, 0, @@ -857,8 +751,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES, - HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT, - HINIC3_CMDQ_DEPTH); + HINIC3_CMDQ_WQ_BUF_SIZE, wqebb_shift, HINIC3_CMDQ_DEPTH); if (err) { PMD_DRV_LOG(ERR, "Allocate cmdq failed"); goto cmdq_alloc_err; @@ -884,7 +777,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } void -hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdqs_free(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; @@ -900,14 +793,36 @@ hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) rte_free(cmdqs); } +static int +hinic3_check_cmdq_done(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) +{ + struct hinic3_ctrl *ctrl = NULL; + uint32_t ctrl_info; + + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + /* Only arm bit using scmd wqe, the wqe is lcmd. */ + ctrl = &wqe->wqe_lcmd.ctrl; + ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); + + if (!WQE_COMPLETED(ctrl_info)) + return -EBUSY; + } else { + ctrl_info = wqe->enhanced_cmdq_wqe.completion.cs_format; + ctrl_info = hinic3_hw_cpu32(ctrl_info); + + if (!ENHANCE_CMDQ_WQE_CS_GET(ctrl_info, HW_BUSY)) + return -EBUSY; + } + return 0; +} + static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) { struct hinic3_cmdq_wqe *wqe = NULL; struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - struct hinic3_ctrl *ctrl = NULL; struct hinic3_cmdq_cmd_info *cmd_info = NULL; - uint32_t status_info, ctrl_info; + uint32_t status_info; uint16_t ci; int errcode; uint64_t end; @@ -928,13 +843,10 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) return -EINVAL; } - /* Only arm bit is using scmd wqe, the wqe is lcmd. */ - wqe_lcmd = &wqe->wqe_lcmd; - ctrl = &wqe_lcmd->ctrl; + /* Only arm bit using scmd wqe, the wqe is lcmd. 
*/ end = cycles + msecs_to_cycles(timeout); do { - ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); - if (WQE_COMPLETED(ctrl_info)) { + if (hinic3_check_cmdq_done(cmdq, wqe) == 0) { done = 1; break; } @@ -943,8 +855,14 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) } while (time_before(cycles, end)); if (done) { - status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); - errcode = WQE_ERRCODE_GET(status_info, VAL); + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + wqe_lcmd = &wqe->wqe_lcmd; + status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); + errcode = WQE_ERRCODE_GET(status_info, VAL); + } else { + status_info = hinic3_hw_cpu32(wqe->enhanced_cmdq_wqe.completion.cs_format); + errcode = ENHANCE_CMDQ_WQE_CS_GET(status_info, ERR_CODE); + } cmdq_update_errcode(cmdq, ci, errcode); clear_wqe_complete_bit(cmdq, wqe); err = 0; diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h index deac909488..b31b61029e 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.h +++ b/drivers/net/hinic3/base/hinic3_cmdq.h @@ -13,25 +13,55 @@ /* Pmd driver uses 64, kernel l2nic uses 4096. */ #define HINIC3_CMDQ_DEPTH 64 -#define HINIC3_CMDQ_BUF_SIZE 2048U +#define HINIC3_CMDQ_BUF_SIZE 1024U #define HINIC3_CEQ_ID_CMDQ 0 -enum cmdq_scmd_type { CMDQ_SET_ARM_CMD = 2 }; +#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -enum cmdq_wqe_type { WQE_LCMD_TYPE = 0, WQE_SCMD_TYPE = 1 }; +enum hinic3_cmdq_mode { + HINIC3_NORMAL_CMDQ, + HINIC3_ENHANCE_CMDQ +}; + +enum cmdq_scmd_type { + CMDQ_SET_ARM_CMD = 2 +}; + +enum cmdq_wqe_type { + WQE_LCMD_TYPE, + WQE_SCMD_TYPE +}; -enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 }; +enum ctrl_sect_len { + CTRL_SECT_LEN = 1, + CTRL_DIRECT_SECT_LEN = 2 +}; -enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 }; +enum bufdesc_len { + BUFDESC_LCMD_LEN = 2, + BUFDESC_SCMD_LEN = 3 +}; -enum data_format { DATA_SGE = 0}; +enum data_format { + DATA_SGE +}; -enum completion_format { COMPLETE_DIRECT = 0, COMPLETE_SGE = 1 }; +enum completion_format { + COMPLETE_DIRECT, + COMPLETE_SGE +}; -enum completion_request { CEQ_SET = 1 }; +enum completion_request { + CEQ_SET = 1 +}; -enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD }; +enum cmdq_cmd_type { + SYNC_CMD_DIRECT_RESP, + SYNC_CMD_SGE_RESP, + ASYNC_CMD +}; enum hinic3_cmdq_type { HINIC3_CMDQ_SYNC, @@ -44,17 +74,63 @@ enum hinic3_db_src_type { HINIC3_DB_SRC_L2NIC_SQ_TYPE }; -enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE }; +enum hinic3_cmdq_db_type { + HINIC3_DB_SQ_RQ_TYPE, + HINIC3_DB_CMDQ_TYPE +}; /* Cmdq ack type. */ enum hinic3_ack_type { HINIC3_ACK_TYPE_CMDQ = 0, HINIC3_ACK_TYPE_SHARE_CQN = 1, HINIC3_ACK_TYPE_APP_CQN = 2, - HINIC3_MOD_ACK_MAX = 15 }; +struct cmdq_enhance_completion { + uint32_t cs_format; + uint32_t sge_resp_hi_addr; + uint32_t sge_resp_lo_addr; + uint32_t sge_resp_len; /* bit 14~31 rsvd, soft can't use. 
*/ +}; + +struct cmdq_enhance_response { + uint32_t cs_format; + uint32_t resvd; + uint64_t direct_data; +}; + +struct sge_send_info { + uint32_t sge_hi_addr; + uint32_t sge_li_addr; + uint32_t seg_len; + uint32_t rsvd; +}; + +struct ctrl_section { + uint32_t header; + uint32_t rsv; + uint32_t sge_send_hi_addr; + uint32_t sge_send_lo_addr; +}; + +struct enhanced_cmdq_wqe { + struct ctrl_section ctrl_sec; /* 16B */ + struct cmdq_enhance_completion completion; /* 16B */ +}; + +/* Enhance cmdq context of hardware */ +struct enhance_cmdq_ctxt_info { + uint64_t eq_cfg; + uint64_t dfx_pi_ci; + + uint64_t pft_thd; + uint64_t pft_ci; + + uint64_t rsv; + uint64_t ci_cla_addr; +}; + /* Cmdq wqe ctrls. */ struct hinic3_cmdq_header { uint32_t header_info; @@ -126,6 +202,7 @@ struct hinic3_cmdq_wqe { union { struct hinic3_cmdq_inline_wqe inline_wqe; struct hinic3_cmdq_wqe_lcmd wqe_lcmd; + struct enhanced_cmdq_wqe enhanced_cmdq_wqe; }; }; @@ -142,8 +219,10 @@ struct hinic3_cmd_cmdq_ctxt { uint16_t func_idx; uint8_t cmdq_id; uint8_t rsvd1[5]; - - struct hinic3_cmdq_ctxt_info ctxt_info; + union { + struct hinic3_cmdq_ctxt_info ctxt_info; + struct enhance_cmdq_ctxt_info enhance_ctxt_info; + }; }; enum hinic3_cmdq_status { @@ -173,8 +252,10 @@ struct hinic3_cmdq { rte_spinlock_t cmdq_lock; struct hinic3_cmdq_ctxt_info cmdq_ctxt; + struct enhance_cmdq_ctxt_info cmdq_enhance_ctxt; struct hinic3_cmdq_cmd_info *cmd_infos; + struct hinic3_cmdqs *cmdqs; }; struct hinic3_cmdqs { @@ -188,6 +269,7 @@ struct hinic3_cmdqs { struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES]; uint32_t status; + uint8_t cmdq_mode; }; struct hinic3_cmd_buf { @@ -215,8 +297,8 @@ int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod int hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, uint32_t timeout); -int hinic3_init_cmdqs(struct hinic3_hwdev *hwdev); +int hinic3_cmdq_init(struct hinic3_hwdev *hwdev); -void hinic3_free_cmdqs(struct hinic3_hwdev *hwdev); +void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev); #endif /* _HINIC3_CMDQ_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.c b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c new file mode 100644 index 0000000000..e09597c9f3 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Huawei Technologies Co., Ltd + */ + +#include <rte_mbuf.h> + +#include "hinic3_compat.h" +#include "hinic3_hwdev.h" +#include "hinic3_hwif.h" +#include "hinic3_wq.h" +#include "hinic3_cmd.h" +#include "hinic3_mgmt.h" +#include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" + +#define WQ_PREFETCH_MAX 4 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +void +hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq) +{ + struct enhance_cmdq_ctxt_info *ctxt_info = &cmdq->cmdq_enhance_ctxt; + struct hinic3_wq *wq = cmdq->wq; + uint64_t cmdq_first_block_paddr, pfn; + uint16_t start_ci = (uint16_t)wq->cons_idx; + uint32_t start_pi = (uint16_t)wq->prod_idx; + + /* The data in HW is Big Endian Format */ + cmdq_first_block_paddr = wq->queue_buf_paddr; + pfn = CMDQ_PFN(cmdq_first_block_paddr, RTE_PGSIZE_4K); + + /* First part 16B */ + ctxt_info->eq_cfg = + ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) | + ENHANCED_CMDQ_SET(0, CTXT0_EQ) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) | + ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT); + + 
ctxt_info->dfx_pi_ci = + ENHANCED_CMDQ_SET(0, CTXT1_Q_DIS) | + ENHANCED_CMDQ_SET(0, CTXT1_ERR_CODE) | + ENHANCED_CMDQ_SET(start_pi, CTXT1_PI) | + ENHANCED_CMDQ_SET(start_ci, CTXT1_CI); + + /* Second part 16B */ + ctxt_info->pft_thd = + ENHANCED_CMDQ_SET(CI_HIGN_IDX(start_ci), CTXT2_PFT_CI) | + ENHANCED_CMDQ_SET(1, CTXT2_O_BIT) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MIN, CTXT2_PFT_MIN) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MAX, CTXT2_PFT_MAX) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_THRESHOLD, CTXT2_PFT_THD); + ctxt_info->pft_ci = + ENHANCED_CMDQ_SET(pfn, CTXT3_PFT_CI_ADDR) | + ENHANCED_CMDQ_SET(start_ci, CTXT3_PFT_CI); + + /* Third part 16B */ + pfn = WQ_BLOCK_PFN(cmdq_first_block_paddr); + + ctxt_info->ci_cla_addr = ENHANCED_CMDQ_SET(pfn, CTXT4_CI_CLA_ADDR); +} + +static void +enhance_cmdq_set_completion(struct cmdq_enhance_completion *completion, + const struct hinic3_cmd_buf *buf_out) +{ + completion->sge_resp_hi_addr = upper_32_bits(buf_out->dma_addr); + completion->sge_resp_lo_addr = lower_32_bits(buf_out->dma_addr); + completion->sge_resp_len = HINIC3_CMDQ_BUF_SIZE; +} + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, + enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, + struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd) +{ + struct enhanced_cmdq_wqe *enhanced_wqe = &wqe->enhanced_cmdq_wqe; + + enhanced_wqe->ctrl_sec.header = + ENHANCE_CMDQ_WQE_HEADER_SET(buf_in->size, SEND_SGE_LEN) | + ENHANCE_CMDQ_WQE_HEADER_SET(1, BDSL) | + ENHANCE_CMDQ_WQE_HEADER_SET(DATA_SGE, DF) | + ENHANCE_CMDQ_WQE_HEADER_SET(NORMAL_WQE_TYPE, DN) | + ENHANCE_CMDQ_WQE_HEADER_SET(COMPACT_WQE_TYPE, EC) | + ENHANCE_CMDQ_WQE_HEADER_SET((uint32_t)wrapped, HW_BUSY_BIT); + + enhanced_wqe->ctrl_sec.sge_send_hi_addr = upper_32_bits(buf_in->dma_addr); + enhanced_wqe->ctrl_sec.sge_send_lo_addr = lower_32_bits(buf_in->dma_addr); + + enhanced_wqe->completion.cs_format = + ENHANCE_CMDQ_WQE_CS_SET(cmd, CMD) | + ENHANCE_CMDQ_WQE_CS_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE) | + ENHANCE_CMDQ_WQE_CS_SET(mod, MOD); + + switch (cmd_type) { + case SYNC_CMD_DIRECT_RESP: + enhanced_wqe->completion.cs_format |= ENHANCE_CMDQ_WQE_CS_SET(INLINE_DATA, CF); + break; + case SYNC_CMD_SGE_RESP: + if (buf_out) { + enhanced_wqe->completion.cs_format |= + ENHANCE_CMDQ_WQE_CS_SET(SGE_RESPONSE, CF); + enhance_cmdq_set_completion(&enhanced_wqe->completion, buf_out); + } + break; + case ASYNC_CMD: + break; + } +} diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.h b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h new file mode 100644 index 0000000000..8de0ae4d71 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_CMDQ_ENHANCE_H_ +#define _HINIC3_CMDQ_ENHANCE_H_ + +#include "hinic3_mgmt.h" + +#define NORMAL_WQE_TYPE 0 +#define COMPACT_WQE_TYPE 1 + +/* First part 16B */ +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT0_RSV1_SHIFT 52 +#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT 53 +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT 61 +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT 62 +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63 + +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT0_RSV1_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_EQ_MASK 0xFFU +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_SHIFT 0 
+#define ENHANCED_CMDQ_CTXT1_ERR_CODE_SHIFT 1 +#define ENHANCED_CMDQ_CTXT1_RSV1_SHIFT 3 +#define ENHANCED_CMDQ_CTXT1_PI_SHIFT 32 +#define ENHANCED_CMDQ_CTXT1_CI_SHIFT 48 + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_MASK 0x1U +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_MASK 0x3U +#define ENHANCED_CMDQ_CTXT1_RSV1_MASK 0x1FFFFFFFU +#define ENHANCED_CMDQ_CTXT1_PI_MASK 0xFFFFU +#define ENHANCED_CMDQ_CTXT1_CI_MASK 0xFFFFU + +/* Second part 16B */ +#define ENHANCED_CMDQ_CTXT2_PFT_CI_SHIFT 0 +#define ENHANCED_CMDQ_CTXT2_O_BIT_SHIFT 4 +#define ENHANCED_CMDQ_CTXT2_PFT_THD_SHIFT 32 +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_SHIFT 46 +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_SHIFT 57 + +#define ENHANCED_CMDQ_CTXT2_PFT_CI_MASK 0xFU +#define ENHANCED_CMDQ_CTXT2_O_BIT_MASK 0x1U +#define ENHANCED_CMDQ_CTXT2_PFT_THD_MASK 0x3FFFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_MASK 0x7FFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_MASK 0x7FU + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT3_PFT_CI_SHIFT 52 + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT3_PFT_CI_MASK 0xFFFFU + +/* Third part 16B */ +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_MASK 0x7FFFFFFFFFFFFFU + +#define ENHANCED_CMDQ_SET(val, member) \ + (((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \ + ENHANCED_CMDQ_##member##_SHIFT) + +#define CI_IDX_HIGH_SHIFH 12 +#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_SHIFT 0 +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_SHIFT 19 +#define ENHANCE_CMDQ_WQE_HEADER_DF_SHIFT 28 +#define ENHANCE_CMDQ_WQE_HEADER_DN_SHIFT 29 +#define ENHANCE_CMDQ_WQE_HEADER_EC_SHIFT 30 +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_MASK 0x3FFFFU +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_HEADER_DF_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_DN_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_EC_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_HEADER_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_HEADER_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0 +#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT 4 +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12 +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT 14 +#define ENHANCE_CMDQ_WQE_CS_MOD_SHIFT 16 +#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU +#define ENHANCE_CMDQ_WQE_CS_CMD_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK 0x1U +#define ENHANCE_CMDQ_WQE_CS_MOD_MASK 0x1FU +#define ENHANCE_CMDQ_WQE_CS_CF_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_CS_##member##_MASK) + +enum complete_format { + INLINE_DATA, + SGE_RESPONSE +}; + +void hinic3_enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd); + +void hinic3_enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq 
*cmdq); + +#endif /*_HINIC3_CMDQ_ENHANCE_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c index d259b88a2d..6541bc0428 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.c +++ b/drivers/net/hinic3/base/hinic3_hw_comm.c @@ -12,7 +12,7 @@ #include "hinic3_wq.h" #include "hinic3_nic_cfg.h" -/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */ +/* Buffer sizes must be in ascending order. */ const uint32_t hinic3_hw_rx_buf_size[] = { HINIC3_RX_BUF_SIZE_32B, HINIC3_RX_BUF_SIZE_64B, @@ -239,11 +239,14 @@ hinic3_convert_rx_buf_size(uint32_t rx_buf_sz, uint32_t *match_sz) } static uint16_t -get_hw_rx_buf_size(uint32_t rx_buf_sz) +get_hw_rx_buf_size(struct hinic3_hwdev *hwdev, uint32_t rx_buf_sz) { uint16_t num_hw_types = RTE_DIM(hinic3_hw_rx_buf_size); uint16_t i; + if (HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev)) + return rx_buf_sz; + for (i = 0; i < num_hw_types; i++) { if (hinic3_hw_rx_buf_size[i] == rx_buf_sz) return i; @@ -271,8 +274,12 @@ hinic3_set_root_ctxt(struct hinic3_hwdev *hwdev, uint32_t rq_depth, root_ctxt.cmdq_depth = 0; root_ctxt.lro_en = 1; root_ctxt.rq_depth = (uint16_t)rte_log2_u32(rq_depth); - root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz); + root_ctxt.rx_buf_sz = get_hw_rx_buf_size(hwdev, rx_buf_sz); root_ctxt.sq_depth = (uint16_t)rte_log2_u32(sq_depth); + root_ctxt.cmdq_mode = hwdev->cmdqs->cmdq_mode; + + if (hwdev->cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) + root_ctxt.cmdq_depth--; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, @@ -403,7 +410,7 @@ hinic3_comm_features_nego(struct hinic3_hwdev *hwdev, uint16_t out_size = sizeof(feature_nego); int err; - if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD) + if (!hwdev || !s_feature || size > HINIC3_MAX_FEATURE_QWORD) return -EINVAL; memset(&feature_nego, 0, sizeof(feature_nego)); diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h index b86f5aad8f..42ff04ee9d 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.h +++ b/drivers/net/hinic3/base/hinic3_hw_comm.h @@ -9,17 +9,17 @@ #define HINIC3_MGMT_CMD_OP_GET 0 #define HINIC3_MGMT_CMD_OP_SET 1 -#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 -#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 -#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 -#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 -#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 - -#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU -#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU -#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U +#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 +#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 +#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 +#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 +#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 + +#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU +#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU +#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U #define HINIC3_MSIX_CNT_SET(val, member) \ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \ @@ -129,7 +129,7 @@ struct hinic3_cmd_root_ctxt { uint8_t cmdq_depth; uint16_t rx_buf_sz; uint8_t lro_en; - uint8_t rsvd1; + uint8_t cmdq_mode; uint16_t sq_depth; uint16_t rq_depth; uint64_t rsvd2; @@ -143,17 +143,16 @@ enum hinic3_fw_ver_type { HINIC3_FW_VER_TYPE_CFG, }; -#define MGMT_MSG_CMD_OP_SET 1 -#define MGMT_MSG_CMD_OP_GET 0 +#define 
MGMT_MSG_CMD_OP_SET 1 +#define MGMT_MSG_CMD_OP_GET 0 -#define COMM_MAX_FEATURE_QWORD 4 struct comm_cmd_feature_nego { struct mgmt_msg_head head; uint16_t func_id; uint8_t opcode; /**< 1: set, 0: get. */ uint8_t rsvd; - uint64_t s_feature[COMM_MAX_FEATURE_QWORD]; + uint64_t s_feature[HINIC3_MAX_FEATURE_QWORD]; }; #define HINIC3_FW_VERSION_LEN 16 diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c index 668bbf4a0e..5d12cf7b5f 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.c +++ b/drivers/net/hinic3/base/hinic3_hwdev.c @@ -261,7 +261,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) { int err; - err = hinic3_init_cmdqs(hwdev); + err = hinic3_cmdq_init(hwdev); if (err) { PMD_DRV_LOG(ERR, "Init cmd queues failed"); return err; @@ -276,7 +276,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) return 0; set_cmdq_depth_err: - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); return err; } @@ -284,7 +284,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev) { - hinic3_free_cmdqs(hwdev); + hinic3_cmdqs_free(hwdev); } static void @@ -426,6 +426,12 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) goto func_reset_err; } + err = hinic3_get_comm_features(hwdev, hwdev->features, HINIC3_MAX_FEATURE_QWORD); + if (err) { + PMD_DRV_LOG(ERR, "Get comm features failed"); + goto get_common_features_err; + } + err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1); if (err) goto set_used_state_err; @@ -444,6 +450,7 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0); set_used_state_err: func_reset_err: +get_common_features_err: get_func_info_err: free_mgmt_channel(hwdev); diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h index 161f1e2de5..c6661aa1a6 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.h +++ b/drivers/net/hinic3/base/hinic3_hwdev.h @@ -23,6 +23,18 @@ enum hinic3_set_arm_type { HINIC3_SET_ARM_TYPE_NUM }; +enum { + HINIC3_F_API_CHAIN = 1U << 0, + HINIC3_F_CLP = 1U << 1, + HINIC3_F_CHANNEL_DETECT = 1U << 2, + HINIC3_F_MBOX_SEGMENT = 1U << 3, + HINIC3_F_CMDQ_NUM = 1U << 4, + HINIC3_F_VIRTIO_VQ_SIZE = 1U << 5, + HINIC3_F_EXTEND_CAP = 1U << 6, + HINIC3_F_SMF_CACHE_INVALID = 1U << 7, + HINIC3_F_ONLY_ENHANCE_CMDQ = 1U << 8, + HINIC3_F_USE_REAL_RX_BUF_SIZE = 1U << 9, +}; struct hinic3_page_addr { void *virt_addr; uint64_t phys_addr; @@ -78,6 +90,11 @@ struct hinic3_hw_stats { #define HINIC3_CHIP_FAULT_SIZE (110 * 1024) #define MAX_DRV_BUF_SIZE 4096 +#define HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_ONLY_ENHANCE_CMDQ) +#define HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_USE_REAL_RX_BUF_SIZE) + struct nic_cmd_chip_fault_stats { uint32_t offset; uint8_t chip_fault_stats[MAX_DRV_BUF_SIZE]; @@ -141,6 +158,7 @@ struct hinic3_hwdev { uint16_t max_vfs; uint16_t link_status; + uint64_t features[HINIC3_MAX_FEATURE_QWORD]; }; bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev); diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c index 5db6d49922..b1f850dfff 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.c +++ b/drivers/net/hinic3/base/hinic3_mgmt.c @@ -13,6 +13,8 @@ #define SEGMENT_LEN 48 #define MGMT_MSG_MAX_SEQ_ID \ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN) +#define MGMT_MSG_LAST_SEG_MAX_LEN \ + 
(MAX_PF_MGMT_BUF_SIZE - SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID) #define BUF_OUT_DEFAULT_SIZE 1 @@ -34,7 +36,8 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, uint8_t seq_id, uint8_t seg_len, uint16_t msg_id) { - if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN) + if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN || + (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)) return false; if (seq_id == 0) { diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h index f8148406d3..4e77b9bec4 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.h +++ b/drivers/net/hinic3/base/hinic3_mgmt.h @@ -70,6 +70,8 @@ typedef enum { #define HINIC3_TOE_RES (1 << RES_TYPE_TOE) #define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC) +#define HINIC3_MAX_FEATURE_QWORD 4 + struct hinic3_recv_msg { void *msg; diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index c35fefdeac..ac44da46c2 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -48,6 +48,46 @@ static const struct vf_msg_handler vf_mag_cmd_handler[] = { }, }; +int +hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, + uint16_t cmd, void *buf_in, uint16_t in_size, + void *buf_out, uint16_t *out_size) +{ + uint32_t i; + bool cmd_to_pf = false; + struct hinic3_handler_info handler_info = { + .cmd = cmd, + .buf_in = buf_in, + .in_size = in_size, + .buf_out = buf_out, + .out_size = out_size, + .dst_func = HINIC3_MGMT_SRC_ID, + .direction = HINIC3_MSG_DIRECT_SEND, + .ack_type = HINIC3_MSG_ACK, + }; + + if (hwdev == NULL) + return -EINVAL; + + if (hinic3_func_type(hwdev) == TYPE_VF) { + if (mod == HINIC3_MOD_HILINK) { + for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { + if (cmd == vf_mag_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } else if (mod == HINIC3_MOD_L2NIC) { + for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { + if (cmd == vf_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } + } + if (cmd_to_pf) + handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); + + return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); +} + /** * Set CI table for a SQ. 
* @@ -1712,43 +1752,6 @@ hinic3_set_rq_flush(struct hinic3_hwdev *hwdev, uint16_t q_id) return err; } -int -hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, - uint16_t cmd, void *buf_in, uint16_t in_size, - void *buf_out, uint16_t *out_size) -{ - uint32_t i; - bool cmd_to_pf = false; - struct hinic3_handler_info handler_info = { - .cmd = cmd, - .buf_in = buf_in, - .in_size = in_size, - .buf_out = buf_out, - .out_size = out_size, - .dst_func = HINIC3_MGMT_SRC_ID, - .direction = HINIC3_MSG_DIRECT_SEND, - .ack_type = HINIC3_MSG_ACK, - }; - - if (hinic3_func_type(hwdev) == TYPE_VF) { - if (mod == HINIC3_MOD_HILINK) { - for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { - if (cmd == vf_mag_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } else if (mod == HINIC3_MOD_L2NIC) { - for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { - if (cmd == vf_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } - } - if (cmd_to_pf) - handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); - - return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); -} - int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status) diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build index 48ac7a47f5..729980d087 100644 --- a/drivers/net/hinic3/base/meson.build +++ b/drivers/net/hinic3/base/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2025 Huawei Technologies Co., Ltd base_sources = files( + 'hinic3_cmdq_enhance.c', 'hinic3_cmdq.c', 'hinic3_eqs.c', 'hinic3_hw_cfg.c', -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
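The tighter segment-length bound added to check_mgmt_seq_id_and_seg_len() above can be sanity-checked in isolation. The standalone sketch below mirrors the arithmetic; SEG_LEN is the real 48-byte segment length, but MSG_MAX_LEN and MAX_BUF_SIZE are hypothetical stand-ins for HINIC3_MSG_TO_MGMT_MAX_LEN and MAX_PF_MGMT_BUF_SIZE, which are defined elsewhere in the driver.

/*
 * Standalone sketch of the last-segment bound; MSG_MAX_LEN and
 * MAX_BUF_SIZE are hypothetical stand-ins for HINIC3_MSG_TO_MGMT_MAX_LEN
 * and MAX_PF_MGMT_BUF_SIZE, which live in the driver headers.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SEG_LEN      48
#define MSG_MAX_LEN  2016
#define MAX_BUF_SIZE 2048

/* same shape as MGMT_MSG_MAX_SEQ_ID: message length rounded up to whole segments */
#define MAX_SEQ_ID   (((MSG_MAX_LEN + SEG_LEN - 1) / SEG_LEN * SEG_LEN) / SEG_LEN)
/* same shape as MGMT_MSG_LAST_SEG_MAX_LEN: room left after MAX_SEQ_ID full segments */
#define LAST_SEG_MAX (MAX_BUF_SIZE - SEG_LEN * MAX_SEQ_ID)

static bool seg_ok(uint8_t seq_id, uint8_t seg_len)
{
	if (seq_id > MAX_SEQ_ID || seg_len > SEG_LEN)
		return false;
	/* the extra check added by the patch: the final segment must fit
	 * in whatever space remains at the tail of the reassembly buffer */
	if (seq_id == MAX_SEQ_ID && seg_len > LAST_SEG_MAX)
		return false;
	return true;
}

int main(void)
{
	printf("max seq id = %d, last segment limit = %d bytes\n", MAX_SEQ_ID, LAST_SEG_MAX);
	printf("seq %d, len %d -> %s\n", MAX_SEQ_ID, SEG_LEN,
	       seg_ok(MAX_SEQ_ID, SEG_LEN) ? "accepted" : "rejected");
	printf("seq %d, len %d -> %s\n", MAX_SEQ_ID, LAST_SEG_MAX,
	       seg_ok(MAX_SEQ_ID, LAST_SEG_MAX) ? "accepted" : "rejected");
	return 0;
}

With these illustrative numbers, a full 48-byte segment at the final sequence id is rejected, since only 32 bytes of the reassembly buffer remain; without the added bound such a segment would have been accepted and overrun the buffer.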
* [V5 3/7] net/hinic3: use different callback func to split new/old cmdq operations 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-03-18 12:31 ` [V5 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang 2026-03-18 12:31 ` [V5 2/7] net/hinic3: add enhance cmdq " Feifei Wang @ 2026-03-18 12:31 ` Feifei Wang 2026-03-18 12:31 ` [V5 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang ` (3 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 12:31 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For the new SPx series NIC with enhanced cmdq, the driver sends control messages to the hardware tile in the NIC (htn); this differs from the previous SPx NIC, which sends control messages to the software tile in the NIC (stn). Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 50 ++---- drivers/net/hinic3/base/hinic3_nic_cfg.h | 79 +++++---- drivers/net/hinic3/hinic3_ethdev.c | 16 +- drivers/net/hinic3/hinic3_nic_io.h | 130 ++++++++++++++ drivers/net/hinic3/hinic3_rx.c | 3 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 161 ++++++++++++++++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 55 ++++++ drivers/net/hinic3/htn_adapt/meson.build | 7 + .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 145 ++++++++++++++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 38 +++++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 11 files changed, 618 insertions(+), 73 deletions(-) create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index ac44da46c2..22caac0457 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -11,6 +11,7 @@ #include "hinic3_mbox.h" #include "hinic3_nic_cfg.h" #include "hinic3_wq.h" +#include "hinic3_nic_io.h" struct vf_msg_handler { uint16_t cmd; @@ -442,6 +443,7 @@ int hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) { struct hinic3_vport_state en_state; + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint16_t out_size = sizeof(en_state); int err; @@ -451,6 +453,7 @@ hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) memset(&en_state, 0, sizeof(en_state)); en_state.func_id = hinic3_global_func_id(hwdev); en_state.state = enable ?
1 : 0; + en_state.num_qps = nic_dev->num_rqs; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_VPORT_ENABLE, @@ -1159,13 +1162,12 @@ hinic3_rss_set_hash_key(struct hinic3_hwdev *hwdev, uint8_t *key, uint16_t key_s } int -hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, - uint32_t *indir_table, uint32_t indir_table_size) +hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table) { struct hinic3_cmd_buf *cmd_buf = NULL; - uint16_t *indir_tbl = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; int err; - uint32_t i; if (!hwdev || !indir_table) return -EINVAL; @@ -1177,31 +1179,28 @@ hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, } cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, - cmd_buf, cmd_buf, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_get_rss_indir_table(nic_dev, cmd_buf); + err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, cmd_buf, 0); if (err) { PMD_DRV_LOG(ERR, "Get rss indir table failed"); hinic3_free_cmd_buf(cmd_buf); return err; } - indir_tbl = (uint16_t *)cmd_buf->buf; - for (i = 0; i < indir_table_size; i++) - indir_table[i] = *(indir_tbl + i); + nic_dev->cmdq_ops->cmd_buf_to_rss_indir_table(cmd_buf, indir_table); hinic3_free_cmd_buf(cmd_buf); return 0; } int -hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size) +hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table) { - struct nic_rss_indirect_tbl *indir_tbl = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - uint32_t i, size; - uint32_t *temp = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; uint64_t out_param = 0; int err; @@ -1214,22 +1213,9 @@ hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table return -ENOMEM; } - cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; - memset(indir_tbl, 0, sizeof(*indir_tbl)); - - for (i = 0; i < indir_table_size; i++) - indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - size = sizeof(indir_tbl->entry) / sizeof(uint16_t); - temp = (uint32_t *)indir_tbl->entry; - for (i = 0; i < size; i++) - temp[i] = rte_cpu_to_be_32(temp[i]); - - err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, - cmd_buf, &out_param, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir_table, cmd_buf); + err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set rss indir table failed"); err = -EFAULT; @@ -1477,7 +1463,7 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EIO; } - *cos_id = vf_dcb.state.default_cos; + *cos_id = vf_dcb.state.default_cos % HINIC3_COS_NUM_MAX_HTN; return 0; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index a88d62333d..06d5bc7d1b 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -14,16 +14,17 @@ #define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1) #define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1) -#define HINIC3_DCB_UP_MAX 0x8 +#define HINIC3_DCB_UP_MAX 0x8 -#define 
HINIC3_MAX_NUM_RQ 256 +#define HINIC3_MAX_NUM_RQ 256 -#define HINIC3_MAX_MTU_SIZE 9600 -#define HINIC3_MIN_MTU_SIZE 256 +#define HINIC3_MAX_MTU_SIZE 9600 +#define HINIC3_MIN_MTU_SIZE 256 -#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX_HTN 4 -#define HINIC3_VLAN_TAG_SIZE 4 +#define HINIC3_VLAN_TAG_SIZE 4 #define HINIC3_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2) @@ -34,28 +35,41 @@ #define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen) -#define HINIC3_PF_SET_VF_ALREADY 0x4 -#define HINIC3_MGMT_STATUS_EXIST 0x6 -#define CHECK_IPSU_15BIT 0x8000 +#define HINIC3_PF_SET_VF_ALREADY 0x4 +#define HINIC3_MGMT_STATUS_EXIST 0x6 +#define CHECK_IPSU_15BIT 0x8000 -#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB -#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC +#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB +#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC -#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF +#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF -#define HINIC3_MAX_UC_MAC_ADDRS 128 -#define HINIC3_MAX_MC_MAC_ADDRS 2048 +#define HINIC3_MAX_UC_MAC_ADDRS 128 +#define HINIC3_MAX_MC_MAC_ADDRS 2048 -#define CAP_INFO_MAX_LEN 512 -#define VENDOR_MAX_LEN 17 +#define CAP_INFO_MAX_LEN 512 +#define VENDOR_MAX_LEN 17 /* Structures for RSS config. */ -#define HINIC3_RSS_INDIR_SIZE 256 -#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 -#define HINIC3_RSS_KEY_SIZE 40 -#define HINIC3_RSS_ENABLE 0x01 -#define HINIC3_RSS_DISABLE 0x00 -#define HINIC3_INVALID_QID_BASE 0xffff +#define HINIC3_RSS_INDIR_SIZE 256 +#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 +#define HINIC3_RSS_KEY_SIZE 40 +#define HINIC3_RSS_ENABLE 0x01 +#define HINIC3_RSS_DISABLE 0x00 +#define HINIC3_INVALID_QID_BASE 0xffff + +#define HINIC3_SUPPORT_FEATURE(dev, feature) \ + ((hinic3_get_driver_feature(dev) & NIC_F_##feature) != 0) +#define HINIC3_SUPPORT_RX_HW_COMPACT_CQE(dev) \ + HINIC3_SUPPORT_FEATURE(dev, RX_HW_COMPACT_CQE) +#define HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(dev) \ + HINIC3_SUPPORT_FEATURE(dev, TX_WQE_COMPACT_TASK) +#define HINIC3_SUPPORT_VXLAN_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, VXLAN_OFFLOAD) +#define HINIC3_SUPPORT_GENEVE_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, GENEVE_OFFLOAD) +#define HINIC3_SUPPORT_IPXIP_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, IPXIP_OFFLOAD) struct hinic3_rss_type { uint8_t tcp_ipv6_ext; @@ -312,7 +326,9 @@ struct hinic3_vport_state { uint16_t func_id; uint16_t rsvd1; uint8_t state; /**< 0:disable, 1:enable. */ - uint8_t rsvd2[3]; + uint8_t num_qps; + uint8_t rx_compact_wqe_en; + uint8_t rsvd2; }; #define MAG_CMD_PORT_DISABLE 0x0 @@ -670,12 +686,15 @@ enum hinic3_func_tbl_cfg_bitmap { FUNC_CFG_INIT, FUNC_CFG_RX_BUF_SIZE, FUNC_CFG_MTU, + FUNC_CFG_RX_COMPACT_WQE_EN, /**< Enable 8Byte WQE. */ }; struct hinic3_func_tbl_cfg { uint16_t rx_wqe_buf_size; uint16_t mtu; - uint32_t rsvd[9]; + uint8_t rx_compact_wqe_en; /**< Enable Rx 8Byte compact WQE. */ + uint8_t rsvd0[3]; + uint32_t rsvd1[8]; }; struct hinic3_cmd_set_func_tbl { @@ -895,7 +914,7 @@ struct hinic3_set_fdir_ethertype_rule { struct mgmt_msg_head head; uint16_t func_id; - uint16_t rsvd1; + uint16_t index; uint8_t pkt_type_en; uint8_t pkt_type; uint8_t qid; @@ -1231,14 +1250,11 @@ int hinic3_rss_template_free(struct hinic3_hwdev *hwdev); * Device pointer to hwdev. * @param[in] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. 
*/ -int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table); /** * Get RSS indirect table. @@ -1247,14 +1263,11 @@ int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_t * Device pointer to hwdev. * @param[out] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table); /** * Set RSS type. diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index da2d6722d2..780b17414a 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -15,6 +15,8 @@ #include "base/hinic3_hw_comm.h" #include "base/hinic3_nic_cfg.h" #include "base/hinic3_nic_event.h" +#include "htn_adapt/hinic3_htn_cmdq.h" +#include "stn_adapt/hinic3_stn_cmdq.h" #include "hinic3_nic_io.h" #include "hinic3_tx.h" #include "hinic3_rx.h" @@ -2577,8 +2579,7 @@ hinic3_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) { PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err); return err; @@ -2626,8 +2627,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) return err; @@ -2648,8 +2648,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, } } - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl); if (err) PMD_DRV_LOG(ERR, "Set RSS reta table failed"); @@ -3387,6 +3386,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto get_cap_fail; } + if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) + nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + else + nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index db5802e4b7..c8e690981b 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -6,6 +6,7 @@ #define _HINIC3_NIC_IO_H_ #include "hinic3_ethdev.h" +#include "base/hinic3_cmdq.h" #define HINIC3_SQ_WQEBB_SHIFT 4 #define HINIC3_RQ_WQEBB_SHIFT 3 @@ -25,6 +26,13 @@ #define HINIC3_CI_PADDR(base_paddr, q_id) \ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE) +#define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) + +#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -37,12 +45,119 @@ enum hinic3_queue_type { HINIC3_MAX_QUEUE_TYPE, }; +enum hinic3_qp_ctxt_type { + HINIC3_QP_CTXT_TYPE_SQ, + HINIC3_QP_CTXT_TYPE_RQ, +}; + +/* Prepare cmd to clean tso/lro space */ +typedef uint8_t 
(*prepare_cmd_buf_clean_tso_lro_space_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type); +/* Prepare cmd to store RQ and TQ ctxt */ +typedef uint8_t (*prepare_cmd_buf_qp_context_multi_store_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, + uint16_t max_ctxts); +/* Prepare cmd to modify vlan tag */ +typedef uint8_t (*prepare_cmd_buf_modify_svlan_t)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, + uint8_t vlan_mode); +/* Prepare cmd to set RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_set_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf); +/* Prepare cmd to get RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_get_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf); +/* Configure RSS indir table */ +typedef void (*cmd_buf_to_rss_indir_table_t)(const struct hinic3_cmd_buf *cmd_buf, + uint32_t *indir_table); + +struct hinic3_nic_cmdq_ops { + prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space; + prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store; + prepare_cmd_buf_modify_svlan_t prepare_cmd_buf_modify_svlan; + prepare_cmd_buf_set_rss_indir_table_t prepare_cmd_buf_set_rss_indir_table; + prepare_cmd_buf_get_rss_indir_table_t prepare_cmd_buf_get_rss_indir_table; + cmd_buf_to_rss_indir_table_t cmd_buf_to_rss_indir_table; +}; + /* Doorbell info. */ struct hinic3_db { uint32_t db_info; uint32_t pi_hi; }; +struct hinic3_sq_ctxt { + uint32_t ci_pi; + uint32_t drop_mode_sp; + uint32_t wq_pfn_hi_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd0; + uint32_t pkt_drop_thd; + uint32_t global_sq_id; + uint32_t vlan_ceq_attr; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t rsvd8; + uint32_t rsvd9; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_ctxt { + uint32_t ci_pi; + uint32_t ceq_attr; + uint32_t wq_pfn_hi_type_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd[3]; + uint32_t cqe_sge_len; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t pi_paddr_hi; + uint32_t pi_paddr_lo; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_cqe_ctx { + struct mgmt_msg_head msg_head; + + uint8_t cqe_type; + uint8_t rq_id; + uint8_t threshold_cqe_num; + uint8_t rsvd1; + + uint16_t msix_entry_idx; + uint16_t rsvd2; + + uint32_t ci_addr_hi; + uint32_t ci_addr_lo; + + uint16_t timer_loop; + uint16_t rsvd3; +}; + +struct hinic3_rq_enable { + struct mgmt_msg_head msg_head; + + uint32_t rq_id; + uint8_t rq_enable; + uint8_t rsvd[3]; +}; + #define DB_INFO_QID_SHIFT 0 #define DB_INFO_NON_FILTER_SHIFT 22 #define DB_INFO_CFLAG_SHIFT 23 @@ -142,6 +257,21 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); + +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @retval Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); + /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e8e417b474..3d5f4e4524 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -407,8 +407,7 @@ hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) /* Build indir tbl according to the number of rss queue. */ hinic3_fill_indir_tbl(nic_dev, indir_tbl); - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c new file mode 100644 index 0000000000..d997647f48 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_htn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + ctxt_block->cmdq_hdr.dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id, uint16_t func_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->dest_func_id = func_id; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t func_id; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + func_id = hinic3_global_func_id(nic_dev->hwdev); + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid, func_id); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], + start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->dest_func_id = func_id; + vlan_ctx->start_qid = q_id; + vlan_ctx->vlan_tag = vlan_tag; + vlan_ctx->vlan_sel = 0; 
/* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_HTN_CMD_SVLAN_MODIFY; +} + +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf + sizeof(struct hinic3_rss_cmd_header); + cmd_buf->size = sizeof(struct hinic3_rss_cmd_header) + HINIC3_RSS_INDIR_SIZE; + memset(indir_tbl, 0, HINIC3_RSS_INDIR_SIZE); + + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl[i] = (uint8_t)(*(indir_table + i)); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(indir_tbl, HINIC3_RSS_INDIR_SIZE); + + return HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + memset(cmd_buf->buf, 0, cmd_buf->size); + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + return HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_be32_to_cpu(cmd_buf->buf, HINIC3_RSS_INDIR_SIZE); + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h new file mode 100644 index 0000000000..1245b9c8d8 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_HTN_CMDQ_H_ +#define _HINIC3_HTN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint32_t rsvd[2]; + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_rss_cmd_header { + uint32_t rsv[3]; + uint16_t rsv1; + uint16_t dest_func_id; +}; + +/* NIC HTN CMD */ +enum hinic3_htn_cmd { + 
HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST = 0x20, + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_LD, + HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN, + HINIC3_HTN_CMD_SVLAN_MODIFY, + HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE, + HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE +}; + +struct hinic3_vlan_ctx { + uint32_t rsv[2]; + uint16_t vlan_tag; + uint8_t vlan_sel; + uint8_t vlan_mode; + uint16_t start_qid; + uint16_t dest_func_id; +}; + +#endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/htn_adapt/meson.build b/drivers/net/hinic3/htn_adapt/meson.build new file mode 100644 index 0000000000..17f7ad09e3 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_htn_cmdq.c', +) diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c new file mode 100644 index 0000000000..3d4becf07c --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_stn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->rsvd = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->func_id = func_id; + vlan_ctx->qid = q_id; + vlan_ctx->vlan_id = 
vlan_tag; + vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i, size; + uint32_t *temp = NULL; + struct nic_rss_indirect_tbl *indir_tbl = NULL; + + indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; + cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); + memset(indir_tbl, 0, sizeof(*indir_tbl)); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); + size = sizeof(indir_tbl->entry) / 4; + temp = (uint32_t *)indir_tbl->entry; + for (i = 0; i < size; i++) { + rte_atomic_thread_fence(rte_memory_order_seq_cst); + temp[i] = rte_cpu_to_be_32(temp[i]); + } + return HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + (void)nic_dev; + memset(cmd_buf->buf, 0, cmd_buf->size); + + return HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint16_t *indir_tbl = NULL; + + indir_tbl = (uint16_t *)cmd_buf->buf; + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) + indir_table[i] = *(indir_tbl + i); +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h new file mode 100644 index 0000000000..f8d26e9397 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_STN_CMDQ_H_ +#define _HINIC3_STN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t rsvd; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; + uint32_t rsvd; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_vlan_ctx { + uint32_t func_id; + uint32_t qid; /* if qid = 0xFFFF, config for all queues */ + uint32_t vlan_id; + uint32_t vlan_mode; + uint32_t vlan_sel; +}; + +#endif /* _HINIC3_STN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/stn_adapt/meson.build b/drivers/net/hinic3/stn_adapt/meson.build new file mode 100644 index 0000000000..99f7f66ab4 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += 
files( + 'hinic3_stn_cmdq.c', +) -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
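The split introduced by this patch is a plain ops-table dispatch: at probe time hinic3_func_init() selects either the stn or the htn callback set based on NIC_F_HTN_CMDQ, and the generic configuration code only ever calls through the table. The standalone sketch below shows the shape of that pattern; fake_cmd_buf, fake_cmdq_ops and the opcodes 0x0A/0x24 are simplified stand-ins for illustration, not the driver's real types or command ids.

/*
 * Illustrative sketch of the callback-table dispatch: fake_cmd_buf,
 * fake_cmdq_ops and the opcodes 0x0A/0x24 are simplified stand-ins,
 * not the hinic3 driver's real types or command ids.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FAKE_F_HTN_CMDQ (1u << 21) /* stands in for NIC_F_HTN_CMDQ */
#define INDIR_SIZE      8          /* real table has HINIC3_RSS_INDIR_SIZE entries */

struct fake_cmd_buf { uint8_t buf[64]; uint16_t size; };

struct fake_cmdq_ops {
	/* fill the command buffer, return the cmdq opcode to send */
	uint8_t (*prepare_set_rss_indir)(const uint32_t *tbl, struct fake_cmd_buf *cb);
};

/* "stn" flavour: 16-bit table entries, old ucode command */
static uint8_t stn_prepare(const uint32_t *tbl, struct fake_cmd_buf *cb)
{
	uint16_t *entry = (uint16_t *)cb->buf;
	int i;

	for (i = 0; i < INDIR_SIZE; i++)
		entry[i] = (uint16_t)tbl[i];
	cb->size = (uint16_t)(INDIR_SIZE * sizeof(uint16_t));
	return 0x0A;
}

/* "htn" flavour: 8-bit table entries behind a small header, new command */
static uint8_t htn_prepare(const uint32_t *tbl, struct fake_cmd_buf *cb)
{
	uint8_t *entry = cb->buf + 4; /* pretend 4-byte destination header */
	int i;

	for (i = 0; i < INDIR_SIZE; i++)
		entry[i] = (uint8_t)tbl[i];
	cb->size = 4 + INDIR_SIZE;
	return 0x24;
}

static const struct fake_cmdq_ops stn_ops = { .prepare_set_rss_indir = stn_prepare };
static const struct fake_cmdq_ops htn_ops = { .prepare_set_rss_indir = htn_prepare };

int main(void)
{
	uint32_t feature_cap = FAKE_F_HTN_CMDQ; /* pretend an htn-capable NIC was probed */
	const struct fake_cmdq_ops *ops =
		(feature_cap & FAKE_F_HTN_CMDQ) ? &htn_ops : &stn_ops;
	uint32_t indir[INDIR_SIZE] = { 0, 1, 2, 3, 0, 1, 2, 3 };
	struct fake_cmd_buf cb;
	uint8_t cmd;

	memset(&cb, 0, sizeof(cb));
	cmd = ops->prepare_set_rss_indir(indir, &cb);
	printf("cmd 0x%x, payload %u bytes\n", (unsigned int)cmd, (unsigned int)cb.size);
	return 0;
}

The same indirection is what lets the following patch add per-device rx_ops and tx_ops tables without touching the generic datapath.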
* [V5 4/7] net/hinic3: add fun init ops to support Compact CQE 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (2 preceding siblings ...) 2026-03-18 12:31 ` [V5 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang @ 2026-03-18 12:31 ` Feifei Wang 2026-03-18 12:32 ` [V5 5/7] net/hinic3: add rx " Feifei Wang ` (2 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 12:31 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For the new SPx NIC, use compact CQE to achieve better performance. In this mode, the CQE is uploaded together with the packet. During function init, the CQE's DMA memory mapping is replaced with a CI index, and the hinic3 driver polls the CI to check whether a packet has arrived. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.c | 212 +++++-- drivers/net/hinic3/hinic3_ethdev.h | 117 ++-- drivers/net/hinic3/hinic3_nic_io.c | 525 ++++++++---------- drivers/net/hinic3/hinic3_nic_io.h | 61 +- drivers/net/hinic3/hinic3_rx.h | 18 + drivers/net/hinic3/hinic3_tx.h | 8 + .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 24 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 12 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 24 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 12 +- 10 files changed, 577 insertions(+), 436 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 780b17414a..1010773ac1 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -32,7 +32,7 @@ #define HINIC3_DEFAULT_RX_FREE_THRESH 32u #define HINIC3_DEFAULT_TX_FREE_THRESH 32u -#define HINIC3_RX_WAIT_CYCLE_THRESH 500 +#define HINIC3_RX_WAIT_CYCLE_THRESH 150 /** * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID. @@ -431,8 +431,10 @@ hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) { + struct hinic3_nic_dev *nic_dev = hwdev->dev_handle; uint8_t default_cos = 0; uint8_t valid_cos_bitmap; + uint8_t cos_num_max; uint8_t i; valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap; @@ -441,7 +443,10 @@ hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EFAULT; } - for (i = 0; i < HINIC3_COS_NUM_MAX; i++) { + cos_num_max = nic_dev->feature_cap & NIC_F_HTN_CMDQ ? + HINIC3_COS_NUM_MAX_HTN : HINIC3_COS_NUM_MAX; + + for (i = 0; i < cos_num_max; i++) { if (valid_cos_bitmap & RTE_BIT32(i)) /* Find max cos id as default cos. */ default_cos = i; @@ -644,6 +649,15 @@ hinic3_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +hinic3_dev_tnl_tso_support(struct rte_eth_dev_info *info, struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + if (HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)) + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO; +} + /** * Get information about the device.
* @@ -684,6 +698,8 @@ hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) + hinic3_dev_tnl_tso_support(info, nic_dev); info->hash_key_size = HINIC3_RSS_KEY_SIZE; info->reta_size = HINIC3_RSS_INDIR_SIZE; @@ -926,16 +942,25 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, struct hinic3_rxq *rxq = NULL; const struct rte_memzone *rq_mz = NULL; const struct rte_memzone *cqe_mz = NULL; + const struct rte_memzone *ci_mz = NULL; const struct rte_memzone *pi_mz = NULL; uint16_t rq_depth, rx_free_thresh; uint32_t queue_buf_size; void *db_addr = NULL; int wqe_count; uint32_t buf_size; + uint32_t rx_buf_size; int err; nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + /* Queue depth must be equal to queue 0 */ + if (qid != 0 && nb_desc != nic_dev->rxqs[0]->q_depth) { + PMD_DRV_LOG(WARNING, "rxq%u depth:%u is not equal to queue0 depth:%u.", + qid, nb_desc, nic_dev->rxqs[0]->q_depth); + nb_desc = nic_dev->rxqs[0]->q_depth; + } + /* Queue depth must be power of 2, otherwise will be aligned up. */ rq_depth = (nb_desc & (nb_desc - 1)) ? ((uint16_t)(1U << (rte_log2_u32(nb_desc) + 1))) : nb_desc; @@ -997,8 +1022,7 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH; rxq->rx_deferred_start = rx_conf->rx_deferred_start; /* If buf_len used for function table, need to translated. */ - uint16_t rx_buf_size = - rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rx_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size); if (err) { PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s", @@ -1006,11 +1030,16 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto adjust_bufsize_fail; } - if (buf_size >= HINIC3_RX_BUF_SIZE_4K && - buf_size < HINIC3_RX_BUF_SIZE_16K) - rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; - else - rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + /* If NIC support compact CQE, use compact wqe as default. 
*/ + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + rxq->wqe_type = HINIC3_COMPACT_RQ_WQE; + } else { + if (buf_size >= HINIC3_RX_BUF_SIZE_4K && + buf_size < HINIC3_RX_BUF_SIZE_16K) + rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; + else + rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + } rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type; rxq->wqebb_size = (uint16_t)RTE_BIT32(rxq->wqebb_shift); @@ -1062,36 +1091,52 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_rx_info_fail; } - cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, - rq_depth * sizeof(*rxq->rx_cqe), - RTE_CACHE_LINE_SIZE, socket_id); - if (!cqe_mz) { - PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", - dev->data->name); - err = -ENOMEM; - goto alloc_cqe_mz_fail; - } - memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); - rxq->cqe_mz = cqe_mz; - rxq->cqe_start_paddr = cqe_mz->iova; - rxq->cqe_start_vaddr = cqe_mz->addr; - rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; - - wqe_count = hinic3_rx_fill_wqe(rxq); - if (wqe_count != rq_depth) { - PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", - wqe_count, dev->data->name); - err = -ENOMEM; - goto fill_rx_wqe_fail; + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_ci_mz", qid, + sizeof(*rxq->rq_ci), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (!ci_mz) { + PMD_DRV_LOG(ERR, "Allocate ci mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + + memset(ci_mz->addr, 0, sizeof(*rxq->rq_ci)); + rxq->ci_mz = ci_mz; + rxq->rq_ci = (struct hinic3_rq_ci_wb *)ci_mz->addr; + rxq->rq_ci_paddr = ci_mz->iova; + } else { + cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, + rq_depth * sizeof(*rxq->rx_cqe), + RTE_CACHE_LINE_SIZE, socket_id); + if (!cqe_mz) { + PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); + rxq->cqe_mz = cqe_mz; + rxq->cqe_start_paddr = cqe_mz->iova; + rxq->cqe_start_vaddr = cqe_mz->addr; + rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; + + wqe_count = hinic3_rx_fill_wqe(rxq); + if (wqe_count != rq_depth) { + PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", + wqe_count, dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(cqe_mz); + goto alloc_cqe_ci_mz_fail; + } } - /* Record rxq pointer in rte_eth rx_queues. */ dev->data->rx_queues[qid] = rxq; return 0; -fill_rx_wqe_fail: - hinic3_memzone_free(rxq->cqe_mz); -alloc_cqe_mz_fail: +alloc_cqe_ci_mz_fail: rte_free(rxq->rx_info); alloc_rx_info_fail: @@ -1199,6 +1244,7 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, txq->owner = 1; txq->cos = nic_dev->default_cos; txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->tx_wqe_compact_task = HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev); ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid, HINIC3_CI_Q_ADDR_SIZE, @@ -1246,7 +1292,6 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_tx_info_fail; } - /* Record txq pointer in rte_eth tx_queues. 
*/ dev->data->tx_queues[qid] = txq; return 0; @@ -1274,7 +1319,10 @@ hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) hinic3_free_rxq_mbufs(rxq); - hinic3_memzone_free(rxq->cqe_mz); + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) + hinic3_memzone_free(rxq->ci_mz); + else + hinic3_memzone_free(rxq->cqe_mz); rte_free(rxq->rx_info); rxq->rx_info = NULL; @@ -1323,9 +1371,12 @@ hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) static int hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; + rxq = dev->data->rx_queues[rq_id]; + rc = hinic3_start_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1333,14 +1384,17 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } - dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", - rq_id); - return rc; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", rq_id); + return rc; + } } + + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1358,14 +1412,10 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) static int hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); - return rc; - } rc = hinic3_stop_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, @@ -1373,6 +1423,15 @@ hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) dev->data->name, rq_id); return rc; } + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); + return rc; + } + } + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; @@ -1388,6 +1447,7 @@ hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id) HINIC3_SET_TXQ_STARTED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1404,6 +1464,7 @@ hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id) dev->data->name, sq_id); return rc; } + HINIC3_SET_TXQ_STOPPED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -3286,6 +3347,24 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { .flow_ops_get = hinic3_dev_filter_ctrl, }; +static void hinic3_nic_tx_rx_ops_init(struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev)) + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_compact_task_offload; + else + nic_dev->tx_ops->nic_tx_set_wqe_offload = hinic3_tx_set_normal_task_offload; + + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + nic_dev->rx_ops->nic_rx_get_cqe_info = hinic3_rx_get_compact_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_integrated_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty; + } else { + nic_dev->rx_ops->nic_rx_get_cqe_info = 
hinic3_rx_get_cqe_info; + nic_dev->rx_ops->nic_rx_cqe_done = hinic3_rx_separate_cqe_done; + nic_dev->rx_ops->nic_rx_poll_rq_empty = hinic3_poll_rq_empty; + } +} + /** * Initialize the network function, including hardware configuration, memory * allocation for data structures, MAC address setup, and interrupt enabling. @@ -3333,6 +3412,27 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto alloc_eth_addr_fail; } + nic_dev->cmdq_ops = rte_zmalloc("cmdq_ops", sizeof(struct hinic3_nic_cmdq_ops), 0); + if (!nic_dev->cmdq_ops) { + PMD_DRV_LOG(ERR, "Allocate cmdq_ops memory failed"); + err = -ENOMEM; + goto alloc_cmdq_ops_fail; + } + + nic_dev->rx_ops = rte_zmalloc("rx_ops", sizeof(struct hinic3_nic_rx_ops), 0); + if (!nic_dev->rx_ops) { + PMD_DRV_LOG(ERR, "Allocate rx_ops memory failed"); + err = -ENOMEM; + goto alloc_rx_ops_fail; + } + + nic_dev->tx_ops = rte_zmalloc("tx_ops", sizeof(struct hinic3_nic_tx_ops), 0); + if (!nic_dev->tx_ops) { + PMD_DRV_LOG(ERR, "Allocate tx_ops memory failed"); + err = -ENOMEM; + goto alloc_tx_ops_fail; + } + nic_dev->mc_list = rte_zmalloc("hinic3_mc", HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0); if (!nic_dev->mc_list) { @@ -3387,9 +3487,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) } if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) - nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); else - nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); + + hinic3_nic_tx_rx_ops_init(nic_dev); err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { @@ -3479,6 +3581,18 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) nic_dev->mc_list = NULL; alloc_mc_list_fail: + rte_free(nic_dev->tx_ops); + nic_dev->tx_ops = NULL; + +alloc_tx_ops_fail: + rte_free(nic_dev->rx_ops); + nic_dev->rx_ops = NULL; + +alloc_rx_ops_fail: + rte_free(nic_dev->cmdq_ops); + nic_dev->cmdq_ops = NULL; + +alloc_cmdq_ops_fail: rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 4a5dbb0844..3898edd076 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -14,44 +14,50 @@ #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle) -#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN -#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD -#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD -#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG -#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM -#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM -#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM -#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN -#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK -#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM -#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 -#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 -#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN -#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED -#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH -#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK -#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN -#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM -#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 -#define 
HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO -#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM +#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN +#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD +#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD +#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG +#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM +#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM +#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM +#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN +#define HINIC3_PKT_TX_QINQ_PKT RTE_MBUF_F_TX_QINQ +#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK +#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM +#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 +#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 +#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN +#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED +#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH +#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK +#define HINIC3_PKT_TX_TUNNEL_GRE RTE_MBUF_F_TX_TUNNEL_GRE +#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN +#define HINIC3_PKT_TX_TUNNEL_VXLAN_GPE RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE +#define HINIC3_PKT_TX_TUNNEL_GENEVE RTE_MBUF_F_TX_TUNNEL_GENEVE +#define HINIC3_PKT_TX_TUNNEL_IPIP RTE_MBUF_F_TX_TUNNEL_IPIP +#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM +#define HINIC3_PKT_TX_OUTER_UDP_CKSUM RTE_MBUF_F_TX_OUTER_UDP_CKSUM +#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 +#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO +#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM #define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool" /* Mbuf pool for copy invalid mbuf segs. 
*/ -#define HINIC3_COPY_MEMPOOL_DEPTH 1024 -#define HINIC3_COPY_MEMPOOL_CACHE 128 -#define HINIC3_COPY_MBUF_SIZE 4096 +#define HINIC3_COPY_MEMPOOL_DEPTH 1024 +#define HINIC3_COPY_MEMPOOL_CACHE 128 +#define HINIC3_COPY_MBUF_SIZE 4096 -#define HINIC3_DEV_NAME_LEN 32 -#define DEV_STOP_DELAY_MS 100 -#define DEV_START_DELAY_MS 100 -#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 +#define HINIC3_DEV_NAME_LEN 32 +#define DEV_STOP_DELAY_MS 100 +#define DEV_START_DELAY_MS 100 +#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 -#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) -#define HINIC3_MAX_QUEUE_NUM 64 +#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) +#define HINIC3_MAX_QUEUE_NUM 256 #define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \ ((struct hinic3_nic_dev *)(dev)->data->dev_private) @@ -68,23 +74,34 @@ enum hinic3_tx_cvlan_type { }; enum nic_feature_cap { - NIC_F_CSUM = RTE_BIT32(0), - NIC_F_SCTP_CRC = RTE_BIT32(1), - NIC_F_TSO = RTE_BIT32(2), - NIC_F_LRO = RTE_BIT32(3), - NIC_F_UFO = RTE_BIT32(4), - NIC_F_RSS = RTE_BIT32(5), - NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), - NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), - NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), - NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), - NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), - NIC_F_FDIR = RTE_BIT32(11), - NIC_F_PROMISC = RTE_BIT32(12), - NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_CSUM = RTE_BIT32(0), + NIC_F_SCTP_CRC = RTE_BIT32(1), + NIC_F_TSO = RTE_BIT32(2), + NIC_F_LRO = RTE_BIT32(3), + NIC_F_UFO = RTE_BIT32(4), + NIC_F_RSS = RTE_BIT32(5), + NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), + NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), + NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), + NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), + NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), + NIC_F_FDIR = RTE_BIT32(11), + NIC_F_PROMISC = RTE_BIT32(12), + NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_PTP_1588_V2 = RTE_BIT32(18), + NIC_F_TX_WQE_COMPACT_TASK = RTE_BIT32(19), + NIC_F_RX_HW_COMPACT_CQE = RTE_BIT32(20), + NIC_F_HTN_CMDQ = RTE_BIT32(21), + NIC_F_GENEVE_OFFLOAD = RTE_BIT32(22), + NIC_F_IPXIP_OFFLOAD = RTE_BIT32(23), + NIC_F_TC_FLOWER_OFFLOAD = RTE_BIT32(24), + NIC_F_HTN_FDIR = RTE_BIT32(25), + NIC_F_SQ_RQ_CI_COALESCE = RTE_BIT32(26), + NIC_F_RX_SW_COMPACT_CQE = RTE_BIT32(27), + }; -#define DEFAULT_DRV_FEATURE 0x3FFF +#define DEFAULT_DRV_FEATURE 0x3FC3FFF TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow); TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow); @@ -133,6 +150,10 @@ struct hinic3_nic_dev { struct hinic3_tcam_info tcam; struct hinic3_ethertype_filter_list filter_ethertype_list; struct hinic3_fdir_rule_filter_list filter_fdir_rule_list; + struct hinic3_nic_cmdq_ops *cmdq_ops; + struct hinic3_nic_rx_ops *rx_ops; + struct hinic3_nic_tx_ops *tx_ops; + }; extern const struct rte_flow_ops hinic3_flow_ops; diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c index 7f2972f1d1..9203dcce40 100644 --- a/drivers/net/hinic3/hinic3_nic_io.c +++ b/drivers/net/hinic3/hinic3_nic_io.c @@ -11,310 +11,194 @@ #include "hinic3_rx.h" #include "hinic3_tx.h" -#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 -#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 -#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF -#define HINIC3_DEAULT_DROP_THD_OFF 0 - -#define WQ_PREFETCH_MAX 6 -#define WQ_PREFETCH_MIN 1 -#define WQ_PREFETCH_THRESHOLD 256 - -#define HINIC3_Q_CTXT_MAX \ - ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) - -enum hinic3_qp_ctxt_type { - HINIC3_QP_CTXT_TYPE_SQ, 
- HINIC3_QP_CTXT_TYPE_RQ, -}; - -struct hinic3_qp_ctxt_header { - uint16_t num_queues; - uint16_t queue_type; - uint16_t start_qid; - uint16_t rsvd; -}; - -struct hinic3_sq_ctxt { - uint32_t ci_pi; - uint32_t drop_mode_sp; /**< Packet drop mode and special flags. */ - uint32_t wq_pfn_hi_owner; /**< High PFN and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd0; /**< Reserved field 0. */ - uint32_t pkt_drop_thd; /**< Packet drop threshold. */ - uint32_t global_sq_id; - uint32_t vlan_ceq_attr; /**< VLAN and CEQ attributes. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t rsvd8; /**< Reserved field 8. */ - uint32_t rsvd9; /**< Reserved field 9. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_rq_ctxt { - uint32_t ci_pi; - uint32_t ceq_attr; /**< Completion event queue attributes. */ - uint32_t wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd[3]; /**< Reserved field. */ - uint32_t cqe_sge_len; /**< CQE scatter/gather element length. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t pi_paddr_hi; /**< High 32-bits of PI DMA address. */ - uint32_t pi_paddr_lo; /**< Low 32-bits of PI DMA address. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. 
*/ -}; - -struct hinic3_sq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_rq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; - uint32_t rsvd; -}; - -#define SQ_CTXT_SIZE(num_sqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_sqs) * sizeof(struct hinic3_sq_ctxt))) - -#define RQ_CTXT_SIZE(num_rqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - -#define CI_IDX_HIGH_SHIFH 12 +#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 +#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 +#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF +#define HINIC3_DEAULT_DROP_THD_OFF 0 + +#define WQ_PREFETCH_MAX 6 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +#define CI_IDX_HIGH_SHIFH 12 #define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) -#define SQ_CTXT_PI_IDX_SHIFT 0 -#define SQ_CTXT_CI_IDX_SHIFT 16 +#define SQ_CTXT_PI_IDX_SHIFT 0 +#define SQ_CTXT_CI_IDX_SHIFT 16 -#define SQ_CTXT_PI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_IDX_MASK 0xFFFFU +#define SQ_CTXT_PI_IDX_MASK 0xFFFFU +#define SQ_CTXT_CI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_PI_SET(val, member) \ +#define SQ_CTXT_CI_PI_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 -#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 +#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 +#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 -#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U -#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U +#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U +#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U -#define SQ_CTXT_MODE_SET(val, member) \ - (((val) & SQ_CTXT_MODE_##member##_MASK) \ +#define SQ_CTXT_MODE_SET(val, member) \ + (((val) & SQ_CTXT_MODE_##member##_MASK) \ << SQ_CTXT_MODE_##member##_SHIFT) -#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 +#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 -#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define SQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define SQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ << SQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 -#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 +#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 +#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 -#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ - (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ +#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ + (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ << SQ_CTXT_PKT_DROP_##member##_SHIFT) -#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 +#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 -#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU +#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU #define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_VLAN_TAG_SHIFT 0 -#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 -#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 -#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 +#define 
SQ_CTXT_VLAN_TAG_SHIFT 0 +#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 +#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 +#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 -#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU -#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U -#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U -#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U +#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU +#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U +#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U +#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U -#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ - (((val) & SQ_CTXT_VLAN_##member##_MASK) \ +#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ + (((val) & SQ_CTXT_VLAN_##member##_MASK) \ << SQ_CTXT_VLAN_##member##_SHIFT) -#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define SQ_CTXT_PREF_CI_HI_SHIFT 0 -#define SQ_CTXT_PREF_OWNER_SHIFT 4 +#define SQ_CTXT_PREF_CI_HI_SHIFT 0 +#define SQ_CTXT_PREF_OWNER_SHIFT 4 -#define SQ_CTXT_PREF_CI_HI_MASK 0xFU -#define SQ_CTXT_PREF_OWNER_MASK 0x1U +#define SQ_CTXT_PREF_CI_HI_MASK 0xFU +#define SQ_CTXT_PREF_OWNER_MASK 0x1U -#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define SQ_CTXT_PREF_SET(val, member) \ - (((val) & SQ_CTXT_PREF_##member##_MASK) \ +#define SQ_CTXT_PREF_SET(val, member) \ + (((val) & SQ_CTXT_PREF_##member##_MASK) \ << SQ_CTXT_PREF_##member##_SHIFT) -#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT) -#define RQ_CTXT_PI_IDX_SHIFT 0 -#define RQ_CTXT_CI_IDX_SHIFT 16 +#define RQ_CTXT_PI_IDX_SHIFT 0 +#define RQ_CTXT_CI_IDX_SHIFT 16 -#define RQ_CTXT_PI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_IDX_MASK 0xFFFFU +#define RQ_CTXT_PI_IDX_MASK 0xFFFFU +#define RQ_CTXT_CI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_PI_SET(val, member) \ +#define RQ_CTXT_CI_PI_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 -#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 +#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 +#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 -#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ - (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ +#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ + (((val) & 
RQ_CTXT_CEQ_ATTR_##member##_MASK) \ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT) -#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 -#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 +#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 +#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 -#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U -#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U +#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define RQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define RQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ << RQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define RQ_CTXT_CQE_LEN_SHIFT 28 +#define RQ_CTXT_CQE_LEN_SHIFT 28 -#define RQ_CTXT_CQE_LEN_MASK 0x3U +#define RQ_CTXT_CQE_LEN_MASK 0x3U -#define RQ_CTXT_CQE_LEN_SET(val, member) \ +#define RQ_CTXT_CQE_LEN_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define RQ_CTXT_PREF_CI_HI_SHIFT 0 -#define RQ_CTXT_PREF_OWNER_SHIFT 4 +#define RQ_CTXT_PREF_CI_HI_SHIFT 0 +#define RQ_CTXT_PREF_OWNER_SHIFT 4 -#define RQ_CTXT_PREF_CI_HI_MASK 0xFU -#define RQ_CTXT_PREF_OWNER_MASK 0x1U +#define RQ_CTXT_PREF_CI_HI_MASK 0xFU +#define RQ_CTXT_PREF_OWNER_MASK 0x1U -#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define RQ_CTXT_PREF_SET(val, member) \ - (((val) & RQ_CTXT_PREF_##member##_MASK) \ +#define RQ_CTXT_PREF_SET(val, member) \ + (((val) & RQ_CTXT_PREF_##member##_MASK) \ << RQ_CTXT_PREF_##member##_SHIFT) -#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ << RQ_CTXT_WQ_BLOCK_##member##_SHIFT) #define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4) -#define WQ_PAGE_PFN_SHIFT 12 -#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_PAGE_PFN_SHIFT 12 +#define WQ_BLOCK_PFN_SHIFT 9 #define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT) #define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -/** - * Prepare the command queue header and converted it to big-endian format. - * - * @param[out] qp_ctxt_hdr - * Pointer to command queue context header structure to be initialized. - * @param[in] ctxt_type - * Type of context (SQ/RQ) to be set in header. - * @param[in] num_queues - * Number of queues. - * @param[in] q_id - * Starting queue ID for this context. 
- */ -static void -hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, - enum hinic3_qp_ctxt_type ctxt_type, - uint16_t num_queues, uint16_t q_id) -{ - qp_ctxt_hdr->queue_type = ctxt_type; - qp_ctxt_hdr->num_queues = num_queues; - qp_ctxt_hdr->start_qid = q_id; - qp_ctxt_hdr->rsvd = 0; - - rte_atomic_thread_fence(rte_memory_order_seq_cst); +#define CQE_CTX_CI_ADDR_SHIFT 4 - hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); -} - -/** - * Initialize context structure for specified TXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] sq - * Pointer to TXQ structure. - * @param[in] sq_id - * ID of TXQ being configured. - * @param[out] sq_ctxt - * Pointer to structure that will hold TXQ context. - */ -static void +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, struct hinic3_sq_ctxt *sq_ctxt) { @@ -386,22 +270,13 @@ hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt)); } -/** - * Initialize context structure for specified RXQ by configuring various queue - * parameters (e.g., ci, pi, work queue page addresses). - * - * @param[in] rq - * Pointer to RXQ structure. - * @param[out] rq_ctxt - * Pointer to structure that will hold RXQ context. - */ -static void +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) { uint64_t wq_page_addr, wq_page_pfn, wq_block_pfn; uint32_t wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo; uint16_t pi_start, ci_start; - uint16_t wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT; + uint16_t wqe_type = rq->wqe_type; uint8_t intr_disable; /* RQ depth is in unit of 8 Bytes. */ @@ -446,6 +321,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE); rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN); break; + case HINIC3_COMPACT_RQ_WQE: + /* Use 8Byte WQE without SGE for CQE. */ + rq_ctxt->wq_pfn_hi_type_owner |= RQ_CTXT_WQ_PAGE_SET(3, WQE_TYPE); + break; default: PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type); } @@ -495,12 +374,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) static int init_sq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL; - struct hinic3_sq_ctxt *sq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_txq *sq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -511,28 +388,14 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_sqs) { - sq_ctxt_block = cmd_buf->buf; - sq_ctxt = sq_ctxt_block->sq_ctxt; - max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX ? 
HINIC3_Q_CTXT_MAX : (nic_dev->num_sqs - q_id); - - hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_SQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - sq = nic_dev->txqs[curr_id]; - hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]); - } - - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set SQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -563,12 +426,10 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) static int init_rq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL; - struct hinic3_rq_ctxt *rq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_rxq *rq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -579,28 +440,14 @@ init_rq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_rqs) { - rq_ctxt_block = cmd_buf->buf; - rq_ctxt = rq_ctxt_block->rq_ctxt; - max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX ? HINIC3_Q_CTXT_MAX : (nic_dev->num_rqs - q_id); - - hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_RQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - rq = nic_dev->rxqs[curr_id]; - hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]); - } - - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store(nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_RQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set RQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -633,9 +480,9 @@ static int clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; struct hinic3_cmd_buf *cmd_buf; uint64_t out_param = 0; + uint8_t cmd; int err; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -644,26 +491,11 @@ clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, return -ENOMEM; } - /* Construct related command request. */ - ctxt_block = cmd_buf->buf; - /* Assumed max_rqs must be equal to max_sqs. */ - ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; - ctxt_block->cmdq_hdr.queue_type = ctxt_type; - ctxt_block->cmdq_hdr.start_qid = 0; - /* - * Add a memory barrier to ensure that instructions are not out of order - * due to compilation optimization. - */ - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); - - cmd_buf->size = sizeof(*ctxt_block); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_clean_tso_lro_space(nic_dev, cmd_buf, ctxt_type); /* Send a command to hardware to clean up queue offload context. 
*/ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if ((err) || (out_param)) { PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, err: %d, out_param: %" PRIu64, @@ -705,6 +537,62 @@ hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev) nic_dev->rx_buff_len = buf_size; } +#define HINIC3_RX_CQE_TIMER_LOOP 15 +#define HINIC3_RX_CQE_COALESCE_NUM 63 + +int +hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rxq *rxq = NULL; + struct hinic3_rq_cqe_ctx cqe_ctx = { 0 }; + rte_iova_t rq_ci_paddr; + uint16_t out_size = sizeof(cqe_ctx); + uint16_t q_id = 0; + uint16_t cmd; + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ) + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN; + else + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX; + + while (q_id < nic_dev->num_rqs) { + rxq = nic_dev->rxqs[q_id]; + if (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE) { + rq_ci_paddr = rxq->rq_ci_paddr >> CQE_CTX_CI_ADDR_SHIFT; + cqe_ctx.ci_addr_hi = upper_32_bits(rq_ci_paddr); + cqe_ctx.ci_addr_lo = lower_32_bits(rq_ci_paddr); + cqe_ctx.threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM; + cqe_ctx.timer_loop = HINIC3_RX_CQE_TIMER_LOOP; + } else { + cqe_ctx.threshold_cqe_num = 0; + cqe_ctx.timer_loop = 0; + } + + cqe_ctx.cqe_type = (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE); + cqe_ctx.msix_entry_idx = rxq->msix_entry_idx; + cqe_ctx.rq_id = q_id; + + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, + &cqe_ctx, sizeof(cqe_ctx), + &cqe_ctx, &out_size); + if (err || !out_size || cqe_ctx.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed, qid: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, err, cqe_ctx.msg_head.status, out_size); + return -EFAULT; + } + q_id++; + } + + return 0; +} + int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) { @@ -768,13 +656,50 @@ hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) } } + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + /* Init Rxq CQE context. 
*/ + err = hinic3_init_rq_cqe_ctxts(nic_dev); + if (err) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed"); + goto set_cqe_ctx_fail; + } + } + return 0; +set_cqe_ctx_fail: set_cons_idx_table_err: hinic3_clean_root_ctxt(hwdev); return err; } +int +hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rq_enable msg; + uint16_t out_size = sizeof(msg); + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + memset(&msg, 0, sizeof(msg)); + msg.rq_enable = enable; + msg.rq_id = q_id; + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN, + &msg, sizeof(msg), &msg, &out_size); + if (err || !out_size || msg.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq enable failed, qid: %u, enable: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, enable, err, msg.msg_head.status, out_size); + return -EFAULT; + } + + return 0; +} + void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev) { diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index c8e690981b..d0acba4cf4 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -28,11 +28,6 @@ #define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) -#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) -#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ - + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -231,6 +226,31 @@ hinic3_write_db(void *db_addr, uint16_t q_id, int cos, uint8_t cflag, uint16_t p */ void hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev); +/** + * Initialize RQ integrated CQE context + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev); + +/** + * Set RQ disable or enable + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * @param[in] q_id + * Receive queue id. + * @param[in] enable + * 1: enable 0: disable + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable); + /** * Initialize qps contexts, set SQ ci attributes, arm all SQ. * @@ -268,7 +288,8 @@ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); /** * Get cmdq ops hardware tile NIC(htn) supported. * - * @retval Pointer to ops. + * @return + * Pointer to ops. */ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); @@ -279,9 +300,6 @@ struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); * Pointer to ethernet device structure. * @param[out] s_feature * s_feature driver supported. - * - * @return - * 0 on success, non-zero on failure. */ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_feature); @@ -296,4 +314,29 @@ void hinic3_update_driver_feature(struct hinic3_nic_dev *nic_dev, uint64_t s_fea */ uint64_t hinic3_get_driver_feature(struct hinic3_nic_dev *nic_dev); +/** + * Initialize context structure for specified TXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] sq + * Pointer to TXQ structure. + * @param[in] sq_id + * ID of TXQ being configured. 
+ * @param[out] sq_ctxt + * Pointer to structure that will hold TXQ context. + */ +void hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, uint16_t sq_id, + struct hinic3_sq_ctxt *sq_ctxt); + +/** + * Initialize context structure for specified RXQ by configuring various queue + * parameters (e.g., ci, pi, work queue page addresses). + * + * @param[in] rq + * Pointer to RXQ structure. + * @param[out] rq_ctxt + * Pointer to structure that will hold RXQ context. + */ +void hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt); + #endif /* _HINIC3_NIC_IO_H_ */ diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 1a92df59b7..7ae39e3e91 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -279,6 +279,24 @@ struct __rte_cache_aligned hinic3_rxq { #endif }; +/* Rx CQE info get callback function */ +typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/* Rx CQE check status callback function */ +typedef bool (*nic_rx_cqe_done_t)(struct hinic3_rxq *rxq, + volatile struct hinic3_rq_cqe **rx_cqe); + +/* Rx CQE empty poll callback function */ +typedef int (*nic_rx_poll_rq_empty_t)(struct hinic3_rxq *rxq); + +struct hinic3_nic_rx_ops { + nic_rx_get_cqe_info_t nic_rx_get_cqe_info; + nic_rx_cqe_done_t nic_rx_cqe_done; + nic_rx_poll_rq_empty_t nic_rx_poll_rq_empty; +}; + uint16_t hinic3_rx_fill_wqe(struct hinic3_rxq *rxq); uint16_t hinic3_rx_fill_buffers(struct hinic3_rxq *rxq); diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index d150f7c6a4..21958a00cc 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -304,6 +304,14 @@ struct __rte_cache_aligned hinic3_txq { #endif }; +/* Tx WQE offload set callback function */ +typedef void (*nic_tx_set_wqe_offload_t)(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +struct hinic3_nic_tx_ops { + nic_tx_set_wqe_offload_t nic_tx_set_wqe_offload; +}; + void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev); void hinic3_free_txq_mbufs(struct hinic3_txq *txq); void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c index d997647f48..634dfe7239 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_htn_cmdq.h" +#define HTN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define HTN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_htn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_htn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -27,7 +32,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_htn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id, 
uint16_t func_id) { @@ -45,7 +50,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_htn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t func_id; uint16_t i; @@ -65,9 +70,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = HTN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; } @@ -75,10 +80,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_htn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_htn_vlan_ctx); + vlan_ctx = (struct hinic3_htn_vlan_ctx *)cmd_buf->buf; vlan_ctx->dest_func_id = func_id; vlan_ctx->start_qid = q_id; @@ -87,7 +92,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_htn_vlan_ctx)); return HINIC3_HTN_CMD_SVLAN_MODIFY; } diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index 1245b9c8d8..ffafe39fb5 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -7,7 +7,7 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_htn_qp_ctxt_header { uint32_t rsvd[2]; uint16_t num_queues; uint16_t queue_type; @@ -15,12 +15,12 @@ struct hinic3_qp_ctxt_header { uint16_t dest_func_id; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_clean_queue_ctxt { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_htn_qp_ctxt_block { + struct hinic3_htn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; @@ -43,7 +43,7 @@ enum hinic3_htn_cmd { HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE }; -struct hinic3_vlan_ctx { +struct hinic3_htn_vlan_ctx { uint32_t rsv[2]; uint16_t vlan_tag; uint8_t vlan_sel; diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c index 3d4becf07c..dfe8598f78 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -8,11 +8,16 @@ #include "hinic3_hwif.h" #include "hinic3_stn_cmdq.h" +#define STN_SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define STN_RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_stn_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, struct hinic3_cmd_buf *cmd_buf, enum hinic3_qp_ctxt_type ctxt_type) { - struct 
hinic3_clean_queue_ctxt *ctxt_block = NULL; + struct hinic3_stn_clean_queue_ctxt *ctxt_block = NULL; ctxt_block = cmd_buf->buf; ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; @@ -26,7 +31,7 @@ static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_de return HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; } -static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, +static void qp_prepare_cmdq_header(struct hinic3_stn_qp_ctxt_header *qp_ctxt_hdr, enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, uint16_t q_id) { @@ -44,7 +49,7 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic enum hinic3_qp_ctxt_type ctxt_type, uint16_t start_qid, uint16_t max_ctxts) { - struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + struct hinic3_stn_qp_ctxt_block *qp_ctxt_block = NULL; uint16_t i; qp_ctxt_block = cmd_buf->buf; @@ -62,9 +67,9 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic } if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_RQ_CTXT_SIZE(max_ctxts); else - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd_buf->size = STN_SQ_CTXT_SIZE(max_ctxts); return HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; } @@ -72,10 +77,10 @@ static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) { - struct hinic3_vlan_ctx *vlan_ctx = NULL; + struct hinic3_stn_vlan_ctx *vlan_ctx = NULL; - cmd_buf->size = sizeof(struct hinic3_vlan_ctx); - vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + cmd_buf->size = sizeof(struct hinic3_stn_vlan_ctx); + vlan_ctx = (struct hinic3_stn_vlan_ctx *)cmd_buf->buf; vlan_ctx->func_id = func_id; vlan_ctx->qid = q_id; @@ -84,7 +89,8 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint vlan_ctx->vlan_mode = vlan_mode; rte_atomic_thread_fence(rte_memory_order_seq_cst); - hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_stn_vlan_ctx)); return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; } diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h index f8d26e9397..a40c4faa89 100644 --- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -7,27 +7,27 @@ #include "hinic3_nic_io.h" -struct hinic3_qp_ctxt_header { +struct hinic3_stn_qp_ctxt_header { uint16_t num_queues; uint16_t queue_type; uint16_t start_qid; uint16_t rsvd; }; -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_clean_queue_ctxt { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; uint32_t rsvd; }; -struct hinic3_qp_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; +struct hinic3_stn_qp_ctxt_block { + struct hinic3_stn_qp_ctxt_header cmdq_hdr; union { struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; }; }; -struct hinic3_vlan_ctx { +struct hinic3_stn_vlan_ctx { uint32_t func_id; uint32_t qid; /* if qid = 0xFFFF, config for all queues */ uint32_t vlan_id; -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
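The cmdq rework in this patch replaces the inline SQ/RQ context construction with a per-adapter ops table: init_sq_ctxts(), init_rq_ctxts() and clean_queue_offload_ctxt() now only invoke nic_dev->cmdq_ops callbacks and send whatever cmdq opcode the callback returns, while stn_adapt and htn_adapt supply the two concrete implementations behind hinic3_cmdq_get_stn_ops() and hinic3_cmdq_get_htn_ops(). A minimal, self-contained sketch of that dispatch pattern follows; the struct layout, feature-bit value and opcodes below are simplified placeholders for illustration, not the driver's actual definitions.

/*
 * Sketch of the cmdq ops dispatch: select one ops table from a feature bit,
 * let the callback fill the command buffer and return the opcode to send.
 */
#include <stdint.h>
#include <stdio.h>

#define NIC_F_HTN_CMDQ (1ULL << 0)	/* assumed bit position, for illustration only */

struct nic_cmdq_ops {
	/* Fills cmd_buf and returns the cmdq opcode to send. */
	uint8_t (*prepare_cmd_buf_qp_context_multi_store)(void *nic_dev, void *cmd_buf,
							  int ctxt_type, uint16_t start_qid,
							  uint16_t max_ctxts);
};

static uint8_t stn_prepare_qp_ctxt(void *nic_dev, void *cmd_buf, int ctxt_type,
				   uint16_t start_qid, uint16_t max_ctxts)
{
	(void)nic_dev; (void)cmd_buf; (void)ctxt_type; (void)start_qid; (void)max_ctxts;
	return 0x01;	/* stands in for HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX */
}

static uint8_t htn_prepare_qp_ctxt(void *nic_dev, void *cmd_buf, int ctxt_type,
				   uint16_t start_qid, uint16_t max_ctxts)
{
	(void)nic_dev; (void)cmd_buf; (void)ctxt_type; (void)start_qid; (void)max_ctxts;
	return 0x02;	/* stands in for HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST */
}

static const struct nic_cmdq_ops stn_ops = {
	.prepare_cmd_buf_qp_context_multi_store = stn_prepare_qp_ctxt,
};

static const struct nic_cmdq_ops htn_ops = {
	.prepare_cmd_buf_qp_context_multi_store = htn_prepare_qp_ctxt,
};

/* Pick the ops table once, from the negotiated driver feature bits. */
static const struct nic_cmdq_ops *
select_cmdq_ops(uint64_t features)
{
	return (features & NIC_F_HTN_CMDQ) ? &htn_ops : &stn_ops;
}

int main(void)
{
	const struct nic_cmdq_ops *ops = select_cmdq_ops(NIC_F_HTN_CMDQ);
	uint8_t cmd = ops->prepare_cmd_buf_qp_context_multi_store(NULL, NULL, 0, 0, 1);

	printf("selected cmdq opcode: 0x%x\n", cmd);
	return 0;
}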
* [V5 5/7] net/hinic3: add rx ops to support Compact CQE 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (3 preceding siblings ...) 2026-03-18 12:31 ` [V5 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang @ 2026-03-18 12:32 ` Feifei Wang 2026-03-18 12:32 ` [V5 6/7] net/hinic3: add tx " Feifei Wang 2026-03-18 12:32 ` [V5 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 12:32 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt receive path, use different func callback to separate normal CQE process and Compact CQE process. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.h | 3 +- drivers/net/hinic3/hinic3_rx.c | 241 ++++++++++++++++++++++------- drivers/net/hinic3/hinic3_rx.h | 164 +++++++++++++++++++- 3 files changed, 342 insertions(+), 66 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 3898edd076..9061e2b217 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -121,8 +121,7 @@ struct hinic3_nic_dev { uint16_t mtu_size; uint16_t rss_state; - uint8_t num_rss; /**< Number of RSS queues. */ - uint8_t rsvd0; /**< Reserved field 0. */ + uint16_t num_rss; /**< Number of RSS queues. */ uint32_t rx_mode; uint8_t rx_queue_list[HINIC3_MAX_QUEUE_NUM]; diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 3d5f4e4524..af684e77ba 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -219,11 +219,11 @@ hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq) while (free_wqebbs++ < rxq->q_depth) { ci = hinic3_get_rq_local_ci(rxq); - - rx_cqe = &rxq->rx_cqe[ci]; - - /* Clear done bit. */ - rx_cqe->status = 0; + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + rx_cqe = &rxq->rx_cqe[ci]; + /* Clear done bit. */ + rx_cqe->status = 0; + } rx_info = &rxq->rx_info[ci]; rte_pktmbuf_free(rx_info->mbuf); @@ -299,7 +299,7 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) for (i = 0; i < rearm_wqebbs; i++) { dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]); - /* Fill buffer address only. */ + /* Fill packet dma address into wqe. 
*/ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { rq_wqe->extend_wqe.buf_desc.sge.hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); @@ -307,11 +307,16 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) hinic3_hw_be32(lower_32_bits(dma_addr)); rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; - } else { + } else if (rxq->wqe_type == HINIC3_NORMAL_RQ_WQE) { rq_wqe->normal_wqe.buf_hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); rq_wqe->normal_wqe.buf_lo_addr = hinic3_hw_be32(lower_32_bits(dma_addr)); + } else { + rq_wqe->compact_wqe.buf_hi_addr = + hinic3_hw_be32(upper_32_bits(dma_addr)); + rq_wqe->compact_wqe.buf_lo_addr = + hinic3_hw_be32(lower_32_bits(dma_addr)); } rq_wqe = @@ -355,7 +360,7 @@ hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev, void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1)); @@ -372,7 +377,7 @@ hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev) static void hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, uint32_t *indir_tbl) { - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; int i = 0; int j; @@ -522,7 +527,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, uint16_t queue_id) { uint8_t queue_pos; - uint8_t rss_queue_count = nic_dev->num_rss; + uint16_t rss_queue_count = nic_dev->num_rss; queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list, rss_queue_count, queue_id); @@ -534,8 +539,7 @@ hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev, rss_queue_count--; memmove(nic_dev->rx_queue_list + queue_pos, nic_dev->rx_queue_list + queue_pos + 1, - (rss_queue_count - queue_pos) * - sizeof(nic_dev->rx_queue_list[0])); + (rss_queue_count - queue_pos) * sizeof(nic_dev->rx_queue_list[0])); } RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list)); @@ -618,6 +622,33 @@ hinic3_poll_rq_empty(struct hinic3_rxq *rxq) return err; } +int +hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq) +{ + struct hinic3_rx_info *rx_info; + struct hinic3_rq_ci_wb rq_ci; + uint16_t sw_ci; + uint16_t hw_ci; + uint32_t val; + + sw_ci = hinic3_get_rq_local_ci(rxq); + val = rte_read32(&rxq->rq_ci->dw1.value); + rq_ci.dw1.value = hinic3_hw_cpu32(val); + hw_ci = rq_ci.dw1.bs.hw_ci; + + while (sw_ci != hw_ci) { + rx_info = &rxq->rx_info[sw_ci]; + rte_pktmbuf_free(rx_info->mbuf); + rx_info->mbuf = NULL; + + sw_ci++; + sw_ci &= rxq->q_mask; + hinic3_update_rq_local_ci(rxq, 1); + } + + return 0; +} + void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, uint32_t *cqe_hole_cnt, uint32_t *head_ci, uint32_t *head_done) @@ -701,14 +732,17 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) rte_spinlock_unlock(&nic_dev->queue_list_lock); /* Send flush rxq cmd to device. 
*/ - err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + else + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, false); if (err) { PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto rq_flush_failed; } - err = hinic3_poll_rq_empty(rxq); + err = nic_dev->rx_ops->nic_rx_poll_rq_empty(rxq); if (err) { hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt, &head_ci, &head_done); @@ -724,6 +758,7 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return 0; poll_rq_failed: + hinic3_set_rq_enable(nic_dev, rxq->q_id, true); rq_flush_failed: rte_spinlock_lock(&nic_dev->queue_list_lock); set_indir_failed: @@ -746,14 +781,22 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id); if (nic_dev->rss_state == HINIC3_RSS_ENABLE) { - err = hinic3_refill_indir_rqid(rxq); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_FDIR) != 0) + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, true); if (err) { - PMD_DRV_LOG(ERR, - "Refill rq to indirect table failed, eth_dev:%s, queue_idx:%d err:%d", - nic_dev->dev_name, rxq->q_id, err); - hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", + nic_dev->dev_name, rxq->q_id); + } else { + err = hinic3_refill_indir_rqid(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Refill rq to indirect table failed," + "eth_dev:%s, queue_idx:%d err:%d", + nic_dev->dev_name, rxq->q_id, err); + hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + } } } + hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); @@ -772,12 +815,9 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) static inline uint64_t -hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) +hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { - uint16_t vlan_tag; - - vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len); - if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) { + if (!vlan_offload || vlan_tag == 0) { *vlan_tci = 0; return 0; } @@ -788,16 +828,14 @@ hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) } static inline uint64_t -hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) +hinic3_rx_csum(uint16_t csum_err, struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; - uint32_t csum_err; uint64_t flags; if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD))) return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN; - csum_err = HINIC3_GET_RX_CSUM_ERR(status); if (likely(csum_err == 0)) return (HINIC3_PKT_RX_IP_CKSUM_GOOD | HINIC3_PKT_RX_L4_CKSUM_GOOD); @@ -832,11 +870,9 @@ hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) } static inline uint64_t -hinic3_rx_rss_hash(uint32_t offload_type, uint32_t rss_hash_value, uint32_t *rss_hash) +hinic3_rx_rss_hash(uint32_t rss_type, uint32_t rss_hash_value, uint32_t *rss_hash) { - uint32_t rss_type; - rss_type = HINIC3_GET_RSS_TYPES(offload_type); if (likely(rss_type != 0)) { *rss_hash = rss_hash_value; return HINIC3_PKT_RX_RSS_HASH; @@ -931,18 +967,117 @@ hinic3_start_all_rqs(struct rte_eth_dev *eth_dev) return err; } +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + volatile struct hinic3_rq_cqe *cqe = NULL; + uint16_t 
sw_ci; + uint32_t status; + + sw_ci = hinic3_get_rq_local_ci(rxq); + *rx_cqe = &rxq->rx_cqe[sw_ci]; + cqe = *rx_cqe; + + status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&cqe->status, + rte_memory_order_acquire))); + if (!HINIC3_GET_RX_DONE(status)) + return false; + + return true; +} + +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_ci_wb rq_ci; + struct rte_mbuf *rxm = NULL; + uint16_t sw_ci, hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(rte_atomic_load_explicit(&rxq->rq_ci->dw1.value, + rte_memory_order_acquire)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + if (hw_ci == sw_ci) + return false; + + rxm = rxq->rx_info[sw_ci].mbuf; + + *rx_cqe = (volatile struct hinic3_rq_cqe *)rte_mbuf_data_addr_default(rxm); + + return true; +} + +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0 = hinic3_hw_cpu32(rx_cqe->status); + uint32_t dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); + + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + + cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); + cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); + + cqe_info->ptype = HINIC3_GET_RX_PTYPE_OFFLOAD(dw0); + cqe_info->vlan_offload = RQ_CQE_OFFOLAD_TYPE_GET(dw2, VLAN_EN); + cqe_info->rss_type = RQ_CQE_OFFOLAD_TYPE_GET(dw2, RSS_TYPE); + cqe_info->rss_hash_value = dw3; +} + +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0, dw1, dw2; + + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + dw0 = hinic3_hw_cpu32(rx_cqe->status); + dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + } else { + /* Compact Rx CQE mode integrates cqe with packet in big endian way. */ + dw0 = rte_be_to_cpu_32(rx_cqe->status); + dw1 = rte_be_to_cpu_32(rx_cqe->vlan_len); + dw2 = rte_be_to_cpu_32(rx_cqe->offload_type); + } + + cqe_info->cqe_type = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_TYPE); + cqe_info->csum_err = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->vlan_offload = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, VLAN_EN); + cqe_info->cqe_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_LEN); + cqe_info->pkt_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PKT_LEN); + cqe_info->ts_flag = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, TS_FLAG); + cqe_info->ptype = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PTYPE); + cqe_info->rss_hash_value = dw1; + + if (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) { + cqe_info->lro_num = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, NUM_LRO); + cqe_info->vlan_tag = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, VLAN); + } + + if (cqe_info->cqe_type == HINIC3_RQ_CQE_INTEGRATE) + cqe_info->data_offset = + (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) ? 
16 : 8; +} + #define HINIC3_RX_EMPTY_THRESHOLD 3 uint16_t hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct hinic3_rxq *rxq = rx_queue; + struct hinic3_nic_dev *nic_dev = rxq->nic_dev; struct hinic3_rx_info *rx_info = NULL; volatile struct hinic3_rq_cqe *rx_cqe = NULL; + struct hinic3_cqe_info cqe_info = {0}; struct rte_mbuf *rxm = NULL; - uint16_t sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0; - uint32_t status, pkt_len, vlan_len, offload_type, lro_num; + uint16_t sw_ci, rx_buf_len, pkts = 0; + uint32_t pkt_len; uint64_t rx_bytes = 0; - uint32_t hash_value; #ifdef HINIC3_XSTAT_PROF_RX uint64_t t1 = rte_get_tsc_cycles(); @@ -953,20 +1088,22 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) goto out; sw_ci = hinic3_get_rq_local_ci(rxq); - rx_buf_len = rxq->buf_len; while (pkts < nb_pkts) { rx_cqe = &rxq->rx_cqe[sw_ci]; - status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&rx_cqe->status, - rte_memory_order_acquire))); - if (!HINIC3_GET_RX_DONE(status)) { + if (!nic_dev->rx_ops->nic_rx_cqe_done(rxq, &rx_cqe)) { rxq->rxq_stats.empty++; break; } - vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len); + nic_dev->rx_ops->nic_rx_get_cqe_info(rxq, rx_cqe, &cqe_info); - pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len); + pkt_len = cqe_info.pkt_len; + /* + * Compact Rx CQE mode integrates cqe with packet, + * so mbuf length needs to remove the length of cqe. + */ + rx_buf_len = rxq->buf_len - cqe_info.data_offset; rx_info = &rxq->rx_info[sw_ci]; rxm = rx_info->mbuf; @@ -982,7 +1119,7 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (likely(pkt_len <= rx_buf_len)) { rxm->data_len = (uint16_t)pkt_len; rxm->pkt_len = pkt_len; - wqebb_cnt++; + hinic3_update_rq_local_ci(rxq, 1); } else { rxm->data_len = rx_buf_len; rxm->pkt_len = rx_buf_len; @@ -991,33 +1128,28 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * If receive jumbo, updating ci will be done by * hinic3_recv_jumbo_pkt function. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1); - wqebb_cnt = 0; + hinic3_update_rq_local_ci(rxq, 1); hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len); sw_ci = hinic3_get_rq_local_ci(rxq); } - rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_off = RTE_PKTMBUF_HEADROOM + cqe_info.data_offset; rxm->port = rxq->port_id; /* 4. Rx checksum offload. */ - rxm->ol_flags |= hinic3_rx_csum(status, rxq); + rxm->ol_flags |= hinic3_rx_csum(cqe_info.csum_err, rxq); /* 5. Vlan offload. */ - offload_type = hinic3_hw_cpu32(rx_cqe->offload_type); - - rxm->ol_flags |= - hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci); + rxm->ol_flags |= hinic3_rx_vlan(cqe_info.vlan_offload, cqe_info.vlan_tag, + &rxm->vlan_tci); /* 6. RSS. */ - hash_value = hinic3_hw_cpu32(rx_cqe->hash_val); - rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value, + rxm->ol_flags |= hinic3_rx_rss_hash(cqe_info.rss_type, cqe_info.rss_hash_value, &rxm->hash.rss); /* 8. LRO. */ - lro_num = HINIC3_GET_RX_NUM_LRO(status); - if (unlikely(lro_num != 0)) { + if (unlikely(cqe_info.lro_num != 0)) { rxm->ol_flags |= HINIC3_PKT_RX_LRO; - rxm->tso_segsz = pkt_len / lro_num; + rxm->tso_segsz = pkt_len / cqe_info.lro_num; } rx_cqe->status = 0; @@ -1027,9 +1159,6 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } if (pkts) { - /* 9. Update local ci. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt); - /* Update packet stats. 
*/ rxq->rxq_stats.packets += pkts; rxq->rxq_stats.bytes += rx_bytes; diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 7ae39e3e91..2655802467 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -5,15 +5,13 @@ #ifndef _HINIC3_RX_H_ #define _HINIC3_RX_H_ -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0 -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19 -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_SHIFT 0 +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU +#define RQ_CQE_OFFOLAD_TYPE_PTYPE_OFFLOAD_MASK 0xFFFU +#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U +#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU #define DPI_EXT_ACTION_FILED (1ULL << 32) @@ -21,6 +19,9 @@ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \ RQ_CQE_OFFOLAD_TYPE_##member##_MASK) +#define HINIC3_GET_RX_PTYPE_OFFLOAD(offload_type) \ + RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PTYPE_OFFLOAD) + #define HINIC3_GET_RX_PKT_TYPE(offload_type) \ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE) @@ -122,6 +123,54 @@ #define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD) +/* Compact CQE Field */ +/* cqe dw0 */ +#define RQ_COMPACT_CQE_STATUS_RXDONE_SHIFT 31 +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_SHIFT 30 +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_SHIFT 29 +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_SHIFT 28 +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_SHIFT 25 +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_SHIFT 24 +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_SHIFT 23 +#define RQ_COMPACT_CQE_STATUS_PKT_MC_SHIFT 21 +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_SHIFT 19 +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PTYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_SHIFT 0 + +#define RQ_COMPACT_CQE_STATUS_RXDONE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_MC_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_MASK 0x3U +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_PTYPE_MASK 0xFFFU +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_STATUS_GET(val, member) \ + ((((val) >> RQ_COMPACT_CQE_STATUS_##member##_SHIFT) & \ + RQ_COMPACT_CQE_STATUS_##member##_MASK)) + +#define HINIC3_RQ_CQE_SEPARATE 0 +#define HINIC3_RQ_CQE_INTEGRATE 1 + +/* cqe dw2 */ +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_SHIFT 24 +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_SHIFT 8 + +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_MASK 0xFFU +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(val, member) \ + (((val) >> RQ_COMPACT_CQE_OFFLOAD_##member##_SHIFT) & \ + RQ_COMPACT_CQE_OFFLOAD_##member##_MASK) + +#define HINIC3_RQ_COMPACT_CQE_16BYTE 0 +#define HINIC3_RQ_COMPACT_CQE_8BYTE 1 /* Rx cqe checksum err */ #define HINIC3_RX_CSUM_IP_CSUM_ERR RTE_BIT32(0) #define HINIC3_RX_CSUM_TCP_CSUM_ERR RTE_BIT32(1) @@ -195,6 +244,25 @@ struct __rte_cache_aligned hinic3_rq_cqe 
{ uint32_t pkt_info; }; +struct hinic3_cqe_info { + uint8_t data_offset; + uint8_t lro_num; + uint8_t vlan_offload; + uint8_t cqe_len; + + uint8_t cqe_type; + uint8_t ts_flag; + uint16_t csum_err; + + uint16_t vlan_tag; + uint16_t ptype; + + uint16_t pkt_len; + uint16_t rss_type; + + uint32_t rss_hash_value; +}; + /** * Attention: please do not add any member in hinic3_rx_info * because rxq bulk rearm mode will write mbuf in rx_info. @@ -220,13 +288,32 @@ struct hinic3_rq_normal_wqe { uint32_t cqe_lo_addr; }; +struct hinic3_rq_compact_wqe { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + struct hinic3_rq_wqe { union { + struct hinic3_rq_compact_wqe compact_wqe; struct hinic3_rq_normal_wqe normal_wqe; struct hinic3_rq_extend_wqe extend_wqe; }; }; +struct hinic3_rq_ci_wb { + union { + struct { + uint16_t cqe_num; + uint16_t hw_ci; + } bs; + uint32_t value; + } dw1; + + uint32_t rsvd[3]; +}; + + struct __rte_cache_aligned hinic3_rxq { struct hinic3_nic_dev *nic_dev; @@ -263,6 +350,10 @@ struct __rte_cache_aligned hinic3_rxq { struct hinic3_rq_cqe *rx_cqe; struct rte_mempool *mb_pool; + const struct rte_memzone *ci_mz; + struct hinic3_rq_ci_wb *rq_ci; + rte_iova_t rq_ci_paddr; + const struct rte_memzone *cqe_mz; rte_iova_t cqe_start_paddr; void *cqe_start_vaddr; @@ -308,6 +399,7 @@ void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev); int hinic3_update_rss_config(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq); int hinic3_poll_rq_empty(struct hinic3_rxq *rxq); void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, @@ -369,4 +461,60 @@ hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, uint16_t wqe_cnt) rxq->delta += wqe_cnt; } +/** + * Get receive cqe information + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * Receive cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Get receive compact cqe information + * + * @param[in] rx_queue + * Receive queue + * @param[in] rx_cqe + * Receive compact cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Check whether pkt is received when CQE is separated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/** + * Check whether pkt is received when CQE is integrated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +hinic3_rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + #endif /* _HINIC3_RX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
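This patch moves the Rx path onto the same callback pattern: hinic3_recv_pkts() no longer parses the CQE fields directly but goes through nic_dev->rx_ops, with one set of callbacks for the separate (normal) CQE mode and another for the integrated Compact CQE mode. The assignment of rx_ops is not part of these hunks; the sketch below shows one plausible wiring using the hinic3_nic_rx_ops table and functions declared in this patch, with the HINIC3_SUPPORT_RX_HW_COMPACT_CQE() check borrowed from the earlier queue-context patch. The static tables, the exact type of nic_dev->rx_ops and the init-time call site are assumptions, not the driver's actual code.

/*
 * Illustrative wiring of the Rx callback table. Builds only against the
 * driver headers; names used here are declared in hinic3_rx.h in this patch.
 */
#include "hinic3_ethdev.h"
#include "hinic3_rx.h"

static struct hinic3_nic_rx_ops separate_cqe_rx_ops = {
	.nic_rx_get_cqe_info  = hinic3_rx_get_cqe_info,
	.nic_rx_cqe_done      = hinic3_rx_separate_cqe_done,
	.nic_rx_poll_rq_empty = hinic3_poll_rq_empty,
};

static struct hinic3_nic_rx_ops integrated_cqe_rx_ops = {
	.nic_rx_get_cqe_info  = hinic3_rx_get_compact_cqe_info,
	.nic_rx_cqe_done      = hinic3_rx_integrated_cqe_done,
	.nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty,
};

/* Assumed to run once at device init, before the first hinic3_recv_pkts(). */
static void
hinic3_assign_rx_ops(struct hinic3_nic_dev *nic_dev)
{
	if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev))
		nic_dev->rx_ops = &integrated_cqe_rx_ops;
	else
		nic_dev->rx_ops = &separate_cqe_rx_ops;
}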
* [V5 6/7] net/hinic3: add tx ops to support Compact CQE 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (4 preceding siblings ...) 2026-03-18 12:32 ` [V5 5/7] net/hinic3: add rx " Feifei Wang @ 2026-03-18 12:32 ` Feifei Wang 2026-03-18 12:32 ` [V5 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 12:32 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt send path, use different func callback to configure compact wqe and normal wqe offload. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_tx.c | 454 +++++++++++++++++---------------- drivers/net/hinic3/hinic3_tx.h | 146 +++++++++-- 2 files changed, 361 insertions(+), 239 deletions(-) diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c index c896fcc76b..fca94dd08e 100644 --- a/drivers/net/hinic3/hinic3_tx.c +++ b/drivers/net/hinic3/hinic3_tx.c @@ -21,6 +21,7 @@ #define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET 1 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0 +#define MAX_TSO_NUM_FRAG 1024 #define HINIC3_TX_OFFLOAD_MASK \ (HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT) @@ -28,7 +29,8 @@ #define HINIC3_TX_CKSUM_OFFLOAD_MASK \ (HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \ HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \ - HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG) + HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_OUTER_UDP_CKSUM | \ + HINIC3_PKT_TX_TCP_SEG) static inline uint16_t hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq) @@ -56,26 +58,23 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq) } static void * -hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) +hinic3_sq_get_wqebbs(struct hinic3_txq *sq, uint16_t num_wqebbs, uint16_t *prod_idx) { - uint16_t cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx); - uint32_t end_pi; + *prod_idx = MASKED_QUEUE_IDX(sq, sq->prod_idx); + sq->prod_idx += num_wqebbs; - end_pi = cur_pi + wqe_info->wqebb_cnt; - sq->prod_idx += wqe_info->wqebb_cnt; + return NIC_WQE_ADDR(sq, *prod_idx); +} - wqe_info->owner = (uint8_t)(sq->owner); - wqe_info->pi = cur_pi; - wqe_info->wrapped = 0; +static inline uint16_t +hinic3_get_and_update_sq_owner(struct hinic3_txq *sq, uint16_t curr_pi, uint16_t wqebb_cnt) +{ + uint16_t owner = sq->owner; - if (unlikely(end_pi >= sq->q_depth)) { + if (unlikely(curr_pi + wqebb_cnt >= sq->q_depth)) sq->owner = !sq->owner; - if (likely(end_pi > sq->q_depth)) - wqe_info->wrapped = (uint8_t)(sq->q_depth - cur_pi); - } - - return NIC_WQE_ADDR(sq, cur_pi); + return owner; } static inline void @@ -90,61 +89,39 @@ hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info) /** * Sets the WQE combination information in the transmit queue (SQ). * - * @param[in] txq + * @param[in] sq * Point to send queue. * @param[out] wqe_combo * Point to wqe_combo of send queue(SQ). - * @param[in] wqe - * Point to wqe of send queue(SQ). * @param[in] wqe_info * Point to wqe_info of send queue(SQ). 
*/ static void -hinic3_set_wqe_combo(struct hinic3_txq *txq, +hinic3_set_wqe_combo(struct hinic3_txq *sq, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_sq_wqe *wqe, struct hinic3_wqe_info *wqe_info) { - wqe_combo->hdr = &wqe->compact_wqe.wqe_desc; - - if (wqe_info->offload) { - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->task = (struct hinic3_sq_task *) - (void *)txq->sq_head_addr; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr + txq->wqebb_size); - } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = wqe->extend_wqe.buf_desc; - } + uint16_t tmp_pi; - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->hdr = hinic3_sq_get_wqebbs(sq, 1, &wqe_info->pi); + if (wqe_info->wqebb_cnt == 1) { + /* compact wqe */ + wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_4BYTES; + wqe_combo->task = (struct hinic3_sq_task *)&wqe_combo->hdr->queue_info; + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, 1); return; } - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->bds_head = - (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task); - } + /* extend normal wqe */ + wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->task = hinic3_sq_get_wqebbs(sq, 1, &tmp_pi); + if (wqe_info->sge_cnt > 1) + wqe_combo->bds_head = hinic3_sq_get_wqebbs(sq, wqe_info->sge_cnt - 1, &tmp_pi); - if (wqe_info->wqebb_cnt > 1) { - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS; - - /* This section used as vlan insert, needs to clear. */ - wqe_combo->bds_head->rsvd = 0; - } else { - wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; - } + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, wqe_info->wqebb_cnt); } int @@ -311,6 +288,8 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) /** * Prepare the data packet to be sent and calculate the internal L3 offset. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to be processed. * @param[out] inner_l3_offset @@ -319,14 +298,20 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) * 0 as success, -EINVAL as failure. */ static int -hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) +hinic3_tx_offload_pkt_prepare(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + uint16_t *inner_l3_offset) { uint64_t ol_flags = mbuf->ol_flags; - /* Only support vxlan offload. 
*/ - if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) && - (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN))) - return -EINVAL; + if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) { + if (!(((ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN) && + HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_GENEVE) && + HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_IPIP) && + HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)))) + return -EINVAL; + } #ifdef RTE_LIBRTE_ETHDEV_DEBUG if (rte_validate_tx_offload(mbuf) != 0) @@ -358,107 +343,121 @@ hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) return 0; } -static inline void -hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, uint16_t vlan_tag, - uint8_t vlan_type) +void +hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) +{ + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l4_en, INNER_L4_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l3_en, INNER_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->encapsulation, TUNNEL_FLAG); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l3_en, OUT_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l4_en, OUT_L4_EN); + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); + + if (wqe_combo->task_type == SQ_WQE_TASKSECT_16BYTES) { + task->ip_identify = 0; + task->pkt_info2 = 0; + task->vlan_offload = 0; + task->vlan_offload = SQ_TASK_INFO3_SET(offload_info->vlan_tag, VLAN_TAG) | + SQ_TASK_INFO3_SET(offload_info->vlan_sel, VLAN_TYPE) | + SQ_TASK_INFO3_SET(offload_info->vlan_valid, VLAN_TAG_VALID); + task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + } +} + +void +hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) | - SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) | - SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID); + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + wqe_combo->task->pkt_info0 = + SQ_TASK_INFO_SET(offload_info->out_l3_en, OUT_L3_EN) | + SQ_TASK_INFO_SET(offload_info->out_l4_en, OUT_L4_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l3_en, INNER_L3_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l4_en, INNER_L4_EN) | + SQ_TASK_INFO_SET(offload_info->vlan_valid, VLAN_VALID) | + SQ_TASK_INFO_SET(offload_info->vlan_sel, VLAN_SEL) | + SQ_TASK_INFO_SET(offload_info->vlan_tag, VLAN_TAG); + + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); } -/** - * Set the corresponding offload information based on ol_flags of the mbuf. - * - * @param[in] mbuf - * Point to the mbuf for which offload needs to be set in the sending queue. - * @param[out] task - * Point to task of send queue(SQ). - * @param[out] wqe_info - * Point to wqe_info of send queue(SQ). - * @return - * 0 as success, -EINVAL as failure. 
- */ static int -hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task, - struct hinic3_wqe_info *wqe_info) +hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, + struct rte_mbuf *mbuf, + struct hinic3_sq_wqe_combo *wqe_combo, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t pld_offset = 0; - uint32_t queue_info = 0; - uint16_t vlan_tag; - - task->pkt_info0 = 0; - task->ip_identify = 0; - task->pkt_info2 = 0; - task->vlan_offload = 0; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; /* Vlan offload. */ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) { - vlan_tag = mbuf->vlan_tci; - hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0); - task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + offload_info->vlan_valid = 1; + offload_info->vlan_tag = mbuf->vlan_tci; + offload_info->vlan_sel = HINIC3_TX_TPID0; } - /* Cksum offload. */ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK)) - return 0; + goto set_tx_wqe_offload; /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - pld_offset = wqe_info->payload_offset; - if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET) + wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); - - queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF); - - /* Set MSS value. */ - queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS); + offload_info->inner_l3_en = 1; + offload_info->inner_l4_en = 1; + wqe_info->queue_info.tso = 1; + wqe_info->queue_info.mss = mbuf->tso_segsz; } else { if (ol_flags & HINIC3_PKT_TX_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); + offload_info->inner_l3_en = 1; switch (ol_flags & HINIC3_PKT_TX_L4_MASK) { case HINIC3_PKT_TX_TCP_CKSUM: case HINIC3_PKT_TX_UDP_CKSUM: case HINIC3_PKT_TX_SCTP_CKSUM: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); + offload_info->inner_l4_en = 1; break; - case HINIC3_PKT_TX_L4_NO_CKSUM: break; - default: PMD_DRV_LOG(INFO, "not support pkt type"); return -EINVAL; } } - /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) { case HINIC3_PKT_TX_TUNNEL_VXLAN: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG); + case HINIC3_PKT_TX_TUNNEL_VXLAN_GPE: + case HINIC3_PKT_TX_TUNNEL_GENEVE: + offload_info->encapsulation = 1; + wqe_info->queue_info.udp_dp_en = 1; break; - case 0: break; default: - /* For non UDP/GRE tunneling, drop the tunnel packet. 
*/ PMD_DRV_LOG(INFO, "not support tunnel pkt type"); return -EINVAL; } if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN); + offload_info->out_l3_en = 1; - task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); - task->pkt_info2 = hinic3_hw_be32(task->pkt_info2); - wqe_info->queue_info = queue_info; + if (ol_flags & HINIC3_PKT_TX_OUTER_UDP_CKSUM) + offload_info->out_l4_en = 1; + +set_tx_wqe_offload: + nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -477,7 +476,9 @@ static bool hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) { uint32_t total_len, limit_len, checked_len, left_len, adjust_mss; - uint32_t i, max_sges, left_sges, first_len; + uint32_t max_sges, left_sges, first_len; + uint32_t payload_len, frag_num; + uint32_t i; struct rte_mbuf *mbuf_head, *mbuf_first; struct rte_mbuf *mbuf_pre = mbuf; @@ -485,6 +486,17 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) mbuf_head = mbuf; mbuf_first = mbuf; + /* Calculate the number of message payload frag, + * if it exceeds the hardware limit of 10 bits, + * packet will be discarded. + */ + payload_len = mbuf_head->pkt_len - wqe_info->payload_offset; + frag_num = (payload_len + mbuf_head->tso_segsz - 1) / mbuf_head->tso_segsz; + if (frag_num > MAX_TSO_NUM_FRAG) { + PMD_DRV_LOG(WARNING, "tso frag num over hw limit, frag_num:0x%x.", frag_num); + return false; + } + /* Tso sge number validation. */ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) { checked_len = 0; @@ -544,9 +556,48 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) return true; } +static int +hinic3_non_tso_pkt_pre_process(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +{ + struct rte_mbuf *mbuf_pkt = mbuf; + uint32_t total_len = 0; + uint16_t i; + + if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) + return 0; + + /* Non-tso packet length must less than 64KB. */ + if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) + return -EINVAL; + + /* + * Mbuf number of non-tso packet must less than the sge number + * that nic can support. The excess part will be copied to another + * mbuf. + */ + for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { + total_len += mbuf_pkt->data_len; + mbuf_pkt = mbuf_pkt->next; + } + + /* + * Max copy mbuf size is 4KB, packet will be dropped directly, + * if total copy length is more than it. + */ + if ((total_len + HINIC3_COPY_MBUF_SIZE) < mbuf->pkt_len) + return -EINVAL; + + wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; + wqe_info->cpy_mbuf_cnt = 1; + + return 0; +} + /** * Checks and processes transport offload information for data packets. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to send. * @param[in] wqe_info @@ -555,56 +606,29 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) * 0 as success, -EINVAL as failure. */ static int -hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t i, total_len, inner_l3_offset = 0; + uint16_t inner_l3_offset = 0; int err; - struct rte_mbuf *mbuf_pkt = NULL; wqe_info->sge_cnt = mbuf->nb_segs; + wqe_info->cpy_mbuf_cnt = 0; /* Check if the packet set available offload flags. 
*/ if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) { wqe_info->offload = 0; - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } wqe_info->offload = 1; - err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset); + err = hinic3_tx_offload_pkt_prepare(nic_dev, mbuf, &inner_l3_offset); if (err) return err; - /* Non tso mbuf only check sge num. */ + /* Non-tso mbuf only check sge num. */ if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { - if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) - /* Non tso packet len must less than 64KB. */ - return -EINVAL; - - if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) - /* Valid non-tso mbuf. */ - return 0; - - /* - * The number of non-tso packet fragments must be less than 38, - * and mbuf segs greater than 38 must be copied to other - * buffers. - */ - total_len = 0; - mbuf_pkt = mbuf; - for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { - total_len += mbuf_pkt->data_len; - mbuf_pkt = mbuf_pkt->next; - } - - /* Default support copy total 4k mbuf segs. */ - if ((uint32_t)(total_len + (uint16_t)HINIC3_COPY_MBUF_SIZE) < - mbuf->pkt_len) - return -EINVAL; - - wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; - wqe_info->cpy_mbuf_cnt = 1; - - return 0; + return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); } /* Tso mbuf. */ @@ -629,6 +653,7 @@ hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); buf_descs->len = hinic3_hw_be32(len); + buf_descs->rsvd = 0; } static inline struct rte_mbuf * @@ -701,7 +726,6 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, { struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head; - uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt; uint16_t real_segs = mbuf->nb_segs; rte_iova_t dma_addr; @@ -736,11 +760,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, * Parts of wqe is in sq bottom while parts * of wqe is in sq head. 
*/ - if (unlikely(wqe_info->wrapped && - (uint64_t)buf_desc == txq->sq_bot_sge_addr)) - buf_desc = (struct hinic3_sq_bufdesc *) - (void *)txq->sq_head_addr; - + if (unlikely((uint64_t)buf_desc == txq->sq_bot_sge_addr)) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); buf_desc++; } @@ -777,10 +798,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf, hinic3_hw_be32(lower_32_bits(dma_addr)); wqe_desc->ctrl_len = mbuf->data_len; } else { - if (unlikely(wqe_info->wrapped && - ((uint64_t)buf_desc == txq->sq_bot_sge_addr))) - buf_desc = (struct hinic3_sq_bufdesc *) - txq->sq_head_addr; + if (unlikely(((uint64_t)buf_desc == txq->sq_bot_sge_addr))) + buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr; hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len); } @@ -802,44 +821,44 @@ static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, struct hinic3_wqe_info *wqe_info) { + struct hinic3_queue_info *queue_info = &wqe_info->queue_info; struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr; + uint32_t *qsf = &wqe_desc->queue_info; - if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) { - wqe_desc->ctrl_len |= - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | - SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | - SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - /* Compact wqe queue_info will transfer to ucode. */ - wqe_desc->queue_info = 0; - - return; - } - - wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | - SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | + wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DIRECT) | SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - wqe_desc->queue_info = wqe_info->queue_info; - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC); - - if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) { - wqe_desc->queue_info |= - SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); - } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) < - TX_MSS_MIN) { - /* Mss should not less than 80. */ - wqe_desc->queue_info = - SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS); - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { + wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | + SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | + SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + + *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); + + if (!SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS)) { + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); + } else if (SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS) < TX_MSS_MIN) { + /* MSS should not less than 80. 
*/ + *qsf = SQ_CTRL_QUEUE_INFO_CLEAR(*qsf, MSS); + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + } + *qsf = hinic3_hw_be32(*qsf); + } else { + wqe_desc->ctrl_len |= SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->udp_dp_en, UDP_DP_EN) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE); } - wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info); + wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); } /** @@ -861,9 +880,7 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) struct hinic3_tx_info *tx_info = NULL; struct rte_mbuf *mbuf_pkt = NULL; struct hinic3_sq_wqe_combo wqe_combo = {0}; - struct hinic3_sq_wqe *sq_wqe = NULL; struct hinic3_wqe_info wqe_info = {0}; - uint32_t offload_err, free_cnt; uint64_t tx_bytes = 0; uint16_t free_wqebb_cnt, nb_tx; @@ -885,16 +902,28 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Tx loop routine. */ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { mbuf_pkt = *tx_pkts++; - if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) { + if (unlikely(hinic3_get_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_info))) { txq->txq_stats.offload_errors++; break; } - if (!wqe_info.offload) - wqe_info.wqebb_cnt = wqe_info.sge_cnt; - else - /* Use extended sq wqe with normal TS. */ - wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1; + wqe_info.wqebb_cnt = wqe_info.sge_cnt; + if (likely(wqe_info.offload || wqe_info.wqebb_cnt > 1)) { + if (txq->tx_wqe_compact_task) { + /** + * One more wqebb is needed for compact task under two situations: + * 1. TSO: MSS field is needed, no available space for + * compact task in compact wqe. + * 2. SGE number > 1: wqe is handlerd as extented wqe by nic. + */ + if (mbuf_pkt->ol_flags & HINIC3_PKT_TX_TCP_SEG || + wqe_info.wqebb_cnt > 1) + wqe_info.wqebb_cnt++; + } else { + /* Use extended sq wqe with normal TS */ + wqe_info.wqebb_cnt++; + } + } free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq); if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) { @@ -907,28 +936,16 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) } } - /* Get sq wqe address from wqe_page. */ - sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info); - if (unlikely(!sq_wqe)) { - txq->txq_stats.tx_busy++; - break; - } - - /* Task or bd section maybe wrapped for one wqe. */ - hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info); + /* Task or bd section maybe warpped for one wqe. */ + hinic3_set_wqe_combo(txq, &wqe_combo, &wqe_info); - wqe_info.queue_info = 0; /* Fill tx packet offload into qsf and task field. */ - if (wqe_info.offload) { - offload_err = hinic3_set_tx_offload(mbuf_pkt, - wqe_combo.task, - &wqe_info); + offload_err = hinic3_set_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_combo, &wqe_info); if (unlikely(offload_err)) { hinic3_put_sq_wqe(txq, &wqe_info); txq->txq_stats.offload_errors++; break; } - } /* Fill sq_wqe buf_desc and bd_desc. */ err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo, @@ -944,7 +961,12 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) tx_info->mbuf = mbuf_pkt; tx_info->wqebb_cnt = wqe_info.wqebb_cnt; - hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); + /* + * For wqe compact type, no need to prepare + * sq ctrl info. 
+ */ + if (wqe_combo.wqe_type != SQ_WQE_COMPACT_TYPE) + hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info); tx_bytes += mbuf_pkt->pkt_len; } diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h index 21958a00cc..e0ed9908ad 100644 --- a/drivers/net/hinic3/hinic3_tx.h +++ b/drivers/net/hinic3/hinic3_tx.h @@ -6,30 +6,40 @@ #define _HINIC3_TX_H_ #define MAX_SINGLE_SGE_SIZE 65536 -#define HINIC3_NONTSO_PKT_MAX_SGE 38 /**< non-tso max sge 38. */ +#define HINIC3_NONTSO_PKT_MAX_SGE 32 /**< non-tso max sge 32. */ #define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE) #define HINIC3_TSO_PKT_MAX_SGE 127 /**< tso max sge 127. */ #define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE) -/* Tx offload info. */ -struct hinic3_tx_offload_info { - uint8_t outer_l2_len; - uint8_t outer_l3_type; - uint16_t outer_l3_len; - - uint8_t inner_l2_len; - uint8_t inner_l3_type; - uint16_t inner_l3_len; - - uint8_t tunnel_length; - uint8_t tunnel_type; - uint8_t inner_l4_type; - uint8_t inner_l4_len; +/* Tx wqe queue info */ +struct hinic3_queue_info { + uint8_t pri; + uint8_t uc; + uint8_t sctp; + uint8_t udp_dp_en; + uint8_t tso; + uint8_t ufo; + uint8_t payload_offset; + uint8_t pkt_type; + uint16_t mss; + uint16_t rsvd; +}; - uint16_t payload_offset; - uint8_t inner_l4_tcp_udp; - uint8_t rsvd0; /**< Reserved field. */ +/* Tx wqe offload info */ +struct hinic3_offload_info { + uint8_t encapsulation; + uint8_t esp_next_proto; + uint8_t inner_l4_en; + uint8_t inner_l3_en; + uint8_t out_l4_en; + uint8_t out_l3_en; + uint8_t ipsec_offload; + uint8_t pkt_1588; + uint8_t vlan_sel; + uint8_t vlan_valid; + uint16_t vlan_tag; + uint32_t ip_identify; }; /* Tx wqe ctx. */ @@ -42,14 +52,15 @@ struct hinic3_wqe_info { uint8_t rsvd0; /**< Reserved field 0. */ uint16_t payload_offset; - uint8_t wrapped; + uint8_t rsvd1; /**< Reserved field 1. */ uint8_t owner; uint16_t pi; uint16_t wqebb_cnt; - uint16_t rsvd1; /**< Reserved field 1. */ + uint16_t rsvd2; /**< Reserved field 2. */ - uint32_t queue_info; + struct hinic3_queue_info queue_info; + struct hinic3_offload_info offload_info; }; /* Descriptor for the send queue of wqe. */ @@ -103,8 +114,15 @@ struct hinic3_sq_wqe_combo { uint32_t task_type; }; -enum sq_wqe_data_format { +/* Tx queue ctrl info */ +enum sq_wqe_type { SQ_NORMAL_WQE = 0, + SQ_DIRECT_WQE = 1, +}; + +enum sq_wqe_data_format { + SQ_WQE_SGL = 0, + SQ_WQE_INLINE_DATA = 1, }; /* Indicates the type of a WQE. */ @@ -117,7 +135,7 @@ enum sq_wqe_ec_type { /* Indicates the type of tasks with different lengths. 
*/ enum sq_wqe_tasksect_len_type { - SQ_WQE_TASKSECT_46BITS = 0, + SQ_WQE_TASKSECT_4BYTES = 0, SQ_WQE_TASKSECT_16BYTES = 1, }; @@ -177,6 +195,33 @@ enum sq_wqe_tasksect_len_type { ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT))) +/* Compact queue info */ +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_SHIFT 14 +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_SHIFT 16 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_SHIFT 24 +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_SHIFT 25 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_SHIFT 26 +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_SHIFT 27 + +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_MASK 0x3U +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_MASK 0xFFU +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_MASK 0x1U + +#define SQ_CTRL_COMPACT_QUEUE_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_GET(val, member) \ + (((val) >> SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) & \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_CLEAR(val, member) \ + ((val) & (~(SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT))) + /* Setting and obtaining task information */ #define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19 #define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22 @@ -229,6 +274,37 @@ enum sq_wqe_tasksect_len_type { (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \ SQ_TASK_INFO3_##member##_MASK) +/* compact wqe task field */ +#define SQ_TASK_INFO_PKT_1588_SHIFT 31 +#define SQ_TASK_INFO_IPSEC_PROTO_SHIFT 30 +#define SQ_TASK_INFO_OUT_L3_EN_SHIFT 28 +#define SQ_TASK_INFO_OUT_L4_EN_SHIFT 27 +#define SQ_TASK_INFO_INNER_L3_EN_SHIFT 25 +#define SQ_TASK_INFO_INNER_L4_EN_SHIFT 24 +#define SQ_TASK_INFO_ESP_NEXT_PROTO_SHIFT 22 +#define SQ_TASK_INFO_VLAN_VALID_SHIFT 19 +#define SQ_TASK_INFO_VLAN_SEL_SHIFT 16 +#define SQ_TASK_INFO_VLAN_TAG_SHIFT 0 + +#define SQ_TASK_INFO_PKT_1588_MASK 0x1U +#define SQ_TASK_INFO_IPSEC_PROTO_MASK 0x1U +#define SQ_TASK_INFO_OUT_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_OUT_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_ESP_NEXT_PROTO_MASK 0x3U +#define SQ_TASK_INFO_VLAN_VALID_MASK 0x1U +#define SQ_TASK_INFO_VLAN_SEL_MASK 0x7U +#define SQ_TASK_INFO_VLAN_TAG_MASK 0xFFFFU + +#define SQ_TASK_INFO_SET(val, member) \ + (((uint32_t)(val) & SQ_TASK_INFO_##member##_MASK) << \ + SQ_TASK_INFO_##member##_SHIFT) + +#define SQ_TASK_INFO_GET(val, member) \ + (((val) >> SQ_TASK_INFO_##member##_SHIFT) & \ + SQ_TASK_INFO_##member##_MASK) + /* Defines the TX queue status. 
*/ enum hinic3_txq_status { HINIC3_TXQ_STATUS_START = 0, @@ -298,6 +374,8 @@ struct __rte_cache_aligned hinic3_txq { uint64_t sq_head_addr; uint64_t sq_bot_sge_addr; uint32_t cos; + uint8_t tx_wqe_compact_task; + uint8_t rsvd[3]; struct hinic3_txq_stats txq_stats; #ifdef HINIC3_XSTAT_PROF_TX uint64_t prof_tx_end_tsc; @@ -319,4 +397,26 @@ uint16_t hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb int hinic3_stop_sq(struct hinic3_txq *txq); int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev); int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt); + +/** + * Set wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +/** + * Set compact wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); #endif /**< _HINIC3_TX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
* [V5 7/7] net/hinic3: use different callback func to support htn fdir 2026-03-18 12:31 ` [v5 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (5 preceding siblings ...) 2026-03-18 12:32 ` [V5 6/7] net/hinic3: add tx " Feifei Wang @ 2026-03-18 12:32 ` Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-03-18 12:32 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, the way flow rules created is different from previous SPx NIC, so use different callback func to split them. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 55 +- drivers/net/hinic3/base/hinic3_nic_cfg.h | 19 +- drivers/net/hinic3/hinic3_ethdev.c | 41 +- drivers/net/hinic3/hinic3_fdir.c | 657 +++++++++++++----- drivers/net/hinic3/hinic3_fdir.h | 361 ++++++++-- drivers/net/hinic3/hinic3_nic_io.h | 16 - drivers/net/hinic3/hinic3_rx.c | 26 +- drivers/net/hinic3/hinic3_tx.c | 16 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 8 + drivers/net/hinic3/meson.build | 8 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 2 +- .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 8 + 12 files changed, 877 insertions(+), 340 deletions(-) diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index 22caac0457..5387626b98 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -970,7 +970,7 @@ hinic3_set_vlan_filter(struct hinic3_hwdev *hwdev, uint32_t vlan_filter_ctrl) static int hinic3_set_rx_lro(struct hinic3_hwdev *hwdev, uint8_t ipv4_en, - uint8_t ipv6_en, uint8_t lro_max_pkt_len) + uint8_t ipv6_en, uint8_t lro_max_pkt_len) { struct hinic3_cmd_lro_config lro_cfg = {0}; uint16_t out_size = sizeof(lro_cfg); @@ -1029,7 +1029,7 @@ hinic3_set_rx_lro_timer(struct hinic3_hwdev *hwdev, uint32_t timer_value) } int -hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len) { uint8_t ipv4_en = 0, ipv6_en = 0; @@ -1468,54 +1468,6 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return 0; } -/** - * Set the Ethernet type filtering rule for the FDIR of a NIC. - * - * @param[in] hwdev - * Pointer to hardware device structure. - * @param[in] pkt_type - * Indicate the packet type. - * @param[in] queue_id - * Indicate the queue id. - * @param[in] en - * Indicate whether to add or delete an operation. 1 - add; 0 - delete. - * - * @return - * 0 on success, non-zero on failure. 
- */ -int -hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en) -{ - struct hinic3_set_fdir_ethertype_rule ethertype_cmd; - uint16_t out_size = sizeof(ethertype_cmd); - int err; - - if (!hwdev) - return -EINVAL; - - memset(ðertype_cmd, 0, - sizeof(struct hinic3_set_fdir_ethertype_rule)); - ethertype_cmd.func_id = hinic3_global_func_id(hwdev); - ethertype_cmd.pkt_type = pkt_type; - ethertype_cmd.pkt_type_en = en; - ethertype_cmd.qid = (uint8_t)queue_id; - - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, - HINIC3_NIC_CMD_SET_FDIR_STATUS, - ðertype_cmd, sizeof(ethertype_cmd), - ðertype_cmd, &out_size); - if (err || ethertype_cmd.head.status || !out_size) { - PMD_DRV_LOG(ERR, - "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d", - err, ethertype_cmd.head.status, out_size, - ethertype_cmd.func_id); - return -EIO; - } - - return 0; -} - int hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule, uint8_t tcam_rule_type) @@ -1543,8 +1495,7 @@ hinic3_add_tcam_rule(struct hinic3_hwdev *hwdev, struct hinic3_tcam_cfg_rule *tc &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd, &out_size); if (err || tcam_cmd.msg_head.status || !out_size) { - PMD_DRV_LOG(ERR, - "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", + PMD_DRV_LOG(ERR, "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x", err, tcam_cmd.msg_head.status, out_size); return -EIO; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index 06d5bc7d1b..6d3eb433bd 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -1203,7 +1203,7 @@ int hinic3_set_rx_vlan_offload(struct hinic3_hwdev *hwdev, uint8_t en); * @return * 0 on success, non-zero on failure. */ -int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len); /** @@ -1522,8 +1522,21 @@ int hinic3_get_feature_from_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, */ int hinic3_set_feature_to_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, uint16_t size); -int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en); +/** + * Set the Ethernet type filtering rule for the FDIR of a NIC. + * + * @param[in] hwdev + * Pointer to hardware device structure. + * @param[in] pkt_type + * Indicate the packet type. + * @param[in] ethertype_filter + * Pointer to ethertype_filter structure. + * @param[in] en + * Indicate whether to add or delete an operation. 1 - add; 0 - delete. + * + * @return + * 0 on success, non-zero on failure. 
+ */ int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status); diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index 1010773ac1..adeae07f27 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -975,8 +975,8 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, "RX queue depth is out of range from %d to %d", HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH); PMD_DRV_LOG(ERR, - "nb_desc: %d, q_depth: %d, port: %d queue: %d", - nb_desc, rq_depth, dev->data->port_id, qid); + "nb_desc: %d, q_depth: %d, port: %d queue: %d", + nb_desc, rq_depth, dev->data->port_id, qid); return -EINVAL; } @@ -2158,8 +2158,7 @@ hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) } /* Update max frame size. */ - HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = - HINIC3_MTU_TO_PKTLEN(mtu); + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) = HINIC3_MTU_TO_PKTLEN(mtu); nic_dev->mtu_size = mtu; return err; } @@ -2357,6 +2356,12 @@ hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev) uint32_t rx_mode; int err; + if (!(nic_dev->feature_cap & NIC_F_PROMISC)) { + PMD_DRV_LOG(ERR, "nic_dev: %s, port_id: %d, do not support vf promisc: %" PRIu64 "", + nic_dev->dev_name, dev->data->port_id, nic_dev->feature_cap); + return -ENOTSUP; + } + rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC; err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode); @@ -2527,20 +2532,22 @@ hinic3_rss_hash_update(struct rte_eth_dev *dev, } rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | - RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) - ? 1 - : 0; + RTE_ETH_RSS_NONFRAG_IPV4_OTHER)) ? 1 : 0; rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | - RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) - ? 1 - : 0; - rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + RTE_ETH_RSS_NONFRAG_IPV6_OTHER)) ? 1 : 0; rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; - rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + } else { + rss_type.ipv6_ext = 0; + rss_type.tcp_ipv6_ext = 0; + } + err = hinic3_set_rss_type(nic_dev->hwdev, rss_type); if (err) PMD_DRV_LOG(ERR, "Set RSS type failed"); @@ -2597,12 +2604,16 @@ hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) rss_conf->rss_hf |= rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) : 0; - rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0; - rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0; rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0; rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0; - + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) { + rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0; + rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? 
RTE_ETH_RSS_IPV6_TCP_EX : 0; + } else { + rss_conf->rss_hf |= 0; + rss_conf->rss_hf |= 0; + } return 0; } diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c index 263a281729..37a4f0cf52 100644 --- a/drivers/net/hinic3/hinic3_fdir.c +++ b/drivers/net/hinic3/hinic3_fdir.c @@ -2,15 +2,15 @@ * Copyright(c) 2025 Huawei Technologies Co., Ltd */ +#include "base/hinic3_cmd.h" #include "base/hinic3_compat.h" #include "base/hinic3_hwdev.h" #include "base/hinic3_hwif.h" #include "base/hinic3_nic_cfg.h" #include "hinic3_ethdev.h" +#include "hinic3_nic_io.h" -#define HINIC3_UINT1_MAX 0x1 -#define HINIC3_UINT4_MAX 0xf -#define HINIC3_UINT15_MAX 0x7fff +#define HINIC3_INVALID_INDEX -1 #define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \ (&((struct hinic3_nic_dev *)(nic_dev))->tcam) @@ -77,6 +77,8 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src IPv4. */ tcam_key->key_mask.sipv4_h = @@ -99,15 +101,9 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip); } -static void -hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) +static void hinic3_fdir_ipv6_tcam_key_init_sip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) { - /* Fill type of ip. */ - tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; - tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; - - /* Fill src IPv6. */ tcam_key->key_mask_ipv6.sipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); tcam_key->key_mask_ipv6.sipv6_key1 = @@ -140,8 +136,11 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); tcam_key->key_info_ipv6.sipv6_key7 = HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); +} - /* Fill dst IPv6. */ +static void hinic3_fdir_ipv6_tcam_key_init_dip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ tcam_key->key_mask_ipv6.dipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); tcam_key->key_mask_ipv6.dipv6_key1 = @@ -176,6 +175,26 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); } +static void hinic3_fdir_ipv6_tcam_key_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + hinic3_fdir_ipv6_tcam_key_init_sip(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init_dip(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + /* Fill type of ip. */ + tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); +} + /** * Set the TCAM information in notunnel scenario. 
* @@ -204,6 +223,10 @@ hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev, tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + + tcam_key->key_mask.vlan_flag = 1; + tcam_key->key_info.vlan_flag = 0; + tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -223,6 +246,8 @@ hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src ipv4. */ tcam_key->key_mask.sipv4_h = @@ -252,6 +277,8 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_vxlan_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_vxlan_ipv6.vlan_flag = 0; /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. */ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 = @@ -288,77 +315,6 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]); } -static void -hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) -{ - tcam_key->key_mask_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_mask_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - - tcam_key->key_mask_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key5 = - 
HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_mask_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); -} - static void hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, struct hinic3_fdir_filter *rule, @@ -370,11 +326,14 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto; tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info_ipv6.tunnel_type = rule->tunnel_type; tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; tcam_key->key_info_ipv6.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -386,7 +345,7 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.sport = rule->key_spec.src_port; if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY) - hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); } /** @@ -448,9 +407,11 @@ hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev, HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info.tunnel_type = rule->tunnel_type; + tcam_key->key_mask.vlan_flag = 1; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + tcam_key->key_info.vlan_flag = 0; tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -479,6 +440,259 @@ hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev, tcam_key_calculate(tcam_key, fdir_tcam_rule); } +static void +hinic3_fdir_tcam_key_set_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.dipv4_l = + 
HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_sip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_mask_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_dip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_mask_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + 
tcam_key->key_mask_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void hinic3_fdir_tcam_key_set_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_sip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_notunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.outer_ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6_htn.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.inner_ipv4, + &rule->key_spec.inner_ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_vxlan_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_vxlan_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.inner_ipv6, + &rule->key_spec.inner_ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_tunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = rule->tunnel_type; + + tcam_key->key_mask_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_mask_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_h = + 
HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); + + if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_outer_ipv4_info(rule, tcam_key); + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_inner_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_inner_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_info_htn_init(struct rte_eth_dev *dev, + struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key, + struct hinic3_tcam_cfg_rule *fdir_tcam_rule) +{ + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + + tcam_key->key_mask_htn.function_id_h = HINIC3_UINT5_MAX; + tcam_key->key_mask_htn.function_id_l = HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_l = + hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_h = + (hinic3_global_func_id(nic_dev->hwdev) >> HINIC3_UINT5_WIDTH) & HINIC3_UINT5_MAX; + + tcam_key->key_mask_htn.ip_proto = rule->key_mask.proto; + tcam_key->key_info_htn.ip_proto = rule->key_spec.proto; + + tcam_key->key_mask_htn.sport = rule->key_mask.src_port; + tcam_key->key_info_htn.sport = rule->key_spec.src_port; + + tcam_key->key_mask_htn.dport = rule->key_mask.dst_port; + tcam_key->key_info_htn.dport = rule->key_spec.dst_port; + if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL) + hinic3_fdir_tcam_notunnel_htn_init(rule, tcam_key); + else + hinic3_fdir_tcam_tunnel_htn_init(rule, tcam_key); + + fdir_tcam_rule->data.qid = rule->rq_index; + + tcam_key_calculate(tcam_key, fdir_tcam_rule); +} + /** * Find filter in given ethertype filter list. * @@ -513,19 +727,30 @@ hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_li * Point to the tcam filter list. * @param[in] key * The tcam key to find. + * @param[in] action_type + * The type of action. + * @param[in] tcam_index + * The index of tcam. * @return * If a matching filter is found, the filter is returned, otherwise NULL. */ static inline struct hinic3_tcam_filter * hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list, - struct hinic3_tcam_key *key) + struct hinic3_tcam_key *key, + uint8_t action_type, uint16_t tcam_index) { struct hinic3_tcam_filter *it; - TAILQ_FOREACH(it, filter_list, entries) { - if (memcmp(key, &it->tcam_key, - sizeof(struct hinic3_tcam_key)) == 0) { - return it; + if (action_type == HINIC3_ACTION_ADD) { + TAILQ_FOREACH(it, filter_list, entries) { + if (memcmp(key, &it->tcam_key, sizeof(struct hinic3_tcam_key)) == 0) + return it; + } + } else { + TAILQ_FOREACH(it, filter_list, entries) { + if (tcam_index == + (it->index + HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(it->dynamic_block_id))) + return it; } } @@ -588,25 +813,18 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, * * @param[in] dev * Pointer to ethernet device structure. - * @param[in] fdir_tcam_rule - * Indicate the filtering rule to be searched for. * @param[in] tcam_info * Ternary Content-Addressable Memory (TCAM) information. - * @param[in] tcam_filter - * Point to the TCAM filter. * @param[out] tcam_index * Indicate the TCAM index to be searched for. * @result * Pointer to the TCAM dynamic block. If the search fails, NULL is returned. 
*/ static struct hinic3_tcam_dynamic_block * -hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, - struct hinic3_tcam_cfg_rule *fdir_tcam_rule, +hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev, struct hinic3_tcam_info *tcam_info, - struct hinic3_tcam_filter *tcam_filter, uint16_t *tcam_index) { - struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); uint16_t block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt; struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; struct hinic3_tcam_dynamic_block *tmp = NULL; @@ -616,6 +834,8 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, uint16_t index; int err; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + rule_nums += nic_dev->ethertype_rule_nums; /* * Check whether the number of filtering rules reaches the maximum * capacity of dynamic TCAM blocks. @@ -662,8 +882,7 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, if (tmp == NULL || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "Fdir filter dynamic lookup for index failed!"); + PMD_DRV_LOG(ERR, "Fdir filter dynamic lookup for index failed!"); goto look_up_failed; } @@ -674,20 +893,13 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, /* Find the first free position. */ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "tcam block 0x%x supports filter rules is full!", + PMD_DRV_LOG(ERR, "tcam block 0x%x supports filter rules is full!", tmp->dynamic_block_id); goto look_up_failed; } - tcam_filter->dynamic_block_id = tmp->dynamic_block_id; - tcam_filter->index = index; *tcam_index = index; - fdir_tcam_rule->index = - HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) + - index; - return tmp; look_up_failed: @@ -702,6 +914,107 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, return NULL; } +static void +hinic3_tcam_index_free(struct hinic3_nic_dev *nic_dev, uint16_t index, uint16_t block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + + TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list, entries) { + if (tmp->dynamic_block_id == block_id) + break; + } + + if (tmp == NULL || tmp->dynamic_block_id != block_id) { + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); + return; + } + + tmp->dynamic_index[index] = 0; + tmp->dynamic_index_cnt--; + if (tmp->dynamic_index_cnt == 0) { + hinic3_free_tcam_block(nic_dev->hwdev, &block_id); + hinic3_free_dynamic_block_resource(tcam_info, tmp); + } +} + +static uint16_t +hinic3_tcam_alloc_index(void *dev, uint16_t *block_id) +{ + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev; + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + uint16_t index = 0; + + tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev, tcam_info, &index); + if (tmp == NULL) { + PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); + return HINIC3_TCAM_INVALID_INDEX; + } + + tmp->dynamic_index[index] = 1; + tmp->dynamic_index_cnt++; + + *block_id = tmp->dynamic_block_id; + + return index; +} + +static int +hinic3_set_fdir_ethertype_filter(void *hwdev, uint8_t pkt_type, void *filter, uint8_t en) +{ + struct hinic3_set_fdir_ethertype_rule ethertype_cmd; + struct hinic3_ethertype_filter *ethertype_filter = (struct hinic3_ethertype_filter *)filter; + uint16_t out_size = sizeof(ethertype_cmd); + uint16_t 
block_id;
+ uint32_t index = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+ struct hinic3_nic_dev *nic_dev =
+ (struct hinic3_nic_dev *)((struct hinic3_hwdev *)hwdev)->dev_handle;
+
+ if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) {
+ if (en != 0) {
+ index = hinic3_tcam_alloc_index(nic_dev, &block_id);
+ if (index == HINIC3_TCAM_INVALID_INDEX)
+ return -ENOMEM;
+ index += HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id);
+ } else {
+ index = ethertype_filter->tcam_index[pkt_type];
+ }
+ }
+
+ memset(&ethertype_cmd, 0, sizeof(struct hinic3_set_fdir_ethertype_rule));
+ ethertype_cmd.func_id = hinic3_global_func_id(hwdev);
+ ethertype_cmd.pkt_type = pkt_type;
+ ethertype_cmd.pkt_type_en = en;
+ ethertype_cmd.index = index;
+ ethertype_cmd.qid = (uint8_t)ethertype_filter->queue;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_SET_FDIR_STATUS,
+ &ethertype_cmd, sizeof(ethertype_cmd),
+ &ethertype_cmd, &out_size);
+ if (err || ethertype_cmd.head.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "set fdir ethertype rule failed, err: %d, status: 0x%x, out size: 0x%x, func_id %d",
+ err, ethertype_cmd.head.status, out_size, ethertype_cmd.func_id);
+ return -EIO;
+ }
+ if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) {
+ if (en == 0) {
+ hinic3_tcam_index_free(nic_dev, HINIC3_TCAM_GET_INDEX_IN_BLOCK(index),
+ HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index));
+ } else {
+ ethertype_filter->tcam_index[pkt_type] = index;
+ }
+ }
+
+ return 0;
+}
+
 /**
 * Add a TCAM filter.
 *
@@ -722,11 +1035,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev,
 struct hinic3_tcam_info *tcam_info =
 HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
 struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
- struct hinic3_tcam_dynamic_block *tmp = NULL;
 struct hinic3_tcam_filter *tcam_filter;
- uint16_t tcam_block_index = 0;
- uint16_t index = 0;
 int err;
 /* Alloc TCAM filter memory. */
@@ -737,39 +1046,14 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev,
 tcam_filter->tcam_key = *tcam_key;
 tcam_filter->queue = (uint16_t)(fdir_tcam_rule->data.qid);
-
- /* Add new TCAM rules. */
- if (nic_dev->tcam_rule_nums == 0) {
- err = hinic3_alloc_tcam_block(nic_dev->hwdev, &tcam_block_index);
- if (err) {
- PMD_DRV_LOG(ERR,
- "Fdir filter tcam alloc block failed!");
- goto failed;
- }
-
- dynamic_block_ptr =
- hinic3_alloc_dynamic_block_resource(tcam_info,
- tcam_block_index);
- if (dynamic_block_ptr == NULL) {
- PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first block memory failed!");
- goto alloc_block_failed;
- }
- }
-
- /*
- * Look for an available index in the dynamic block to store the new
- * TCAM filter.
- */
- tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info,
- tcam_filter, &index);
- if (tmp == NULL) {
- PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!");
- goto lookup_tcam_index_failed;
- }
+ tcam_filter->index = hinic3_tcam_alloc_index(nic_dev, &tcam_filter->dynamic_block_id);
+ if (tcam_filter->index == HINIC3_TCAM_INVALID_INDEX)
+ goto failed;
+ fdir_tcam_rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tcam_filter->dynamic_block_id) +
+ tcam_filter->index;
 /* Add a new TCAM rule to the network device. 
*/ - err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, - TCAM_RULE_FDIR_TYPE); + err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, TCAM_RULE_FDIR_TYPE); if (err) { PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!"); goto add_tcam_rules_failed; @@ -785,10 +1069,6 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, /* Add a filter to the end of the queue. */ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries); - /* Update dynamic index. */ - tmp->dynamic_index[index] = 1; - tmp->dynamic_index_cnt++; - nic_dev->tcam_rule_nums++; PMD_DRV_LOG(INFO, @@ -796,7 +1076,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, hinic3_global_func_id(nic_dev->hwdev)); PMD_DRV_LOG(INFO, "tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d", - tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index, + tcam_filter->dynamic_block_id, tcam_filter->index, fdir_tcam_rule->index, fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums); return 0; @@ -806,14 +1086,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, TCAM_RULE_FDIR_TYPE); add_tcam_rules_failed: -lookup_tcam_index_failed: - if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL) - hinic3_free_dynamic_block_resource(tcam_info, - dynamic_block_ptr); - -alloc_block_failed: - if (nic_dev->tcam_rule_nums == 0) - hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tcam_filter->dynamic_block_id); failed: rte_free(tcam_filter); @@ -850,8 +1123,7 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, } if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) { - PMD_DRV_LOG(ERR, - "Fdir filter del dynamic lookup for block failed!"); + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); return -EINVAL; } /* Calculate TCAM index. */ @@ -873,14 +1145,9 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, dynamic_block_id, tcam_filter->index, index, tmp->dynamic_index_cnt - 1, nic_dev->tcam_rule_nums - 1); - tmp->dynamic_index[tcam_filter->index] = 0; - tmp->dynamic_index_cnt--; - nic_dev->tcam_rule_nums--; - if (tmp->dynamic_index_cnt == 0) { - hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tmp->dynamic_block_id); - hinic3_free_dynamic_block_resource(tcam_info, tmp); - } + nic_dev->tcam_rule_nums--; /* If the number of rules is 0, the TCAM filter is disabled. */ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) @@ -930,6 +1197,7 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_tcam_filter *tcam_filter; @@ -940,11 +1208,15 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule)); memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key)); - hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, - &fdir_tcam_rule); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + else + hinic3_fdir_tcam_info_htn_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + /* Search for a filter. 
*/ tcam_filter = - hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key); + hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_ADD, HINIC3_INVALID_INDEX); if (tcam_filter != NULL && add) { PMD_DRV_LOG(ERR, "Filter exists."); return -EEXIST; @@ -965,6 +1237,13 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, fdir_filter->tcam_index = (int)(fdir_tcam_rule.index); } else { + tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_NOT_ADD, + fdir_filter->tcam_index); + if (tcam_filter == NULL) { + PMD_DRV_LOG(ERR, "Filter doesn't exist."); + return -ENOENT; + } PMD_DRV_LOG(INFO, "begin to del tcam filter"); ret = hinic3_del_tcam_filter(dev, tcam_filter); if (ret) @@ -1088,7 +1367,7 @@ hinic3_free_fdir_filter(struct rte_eth_dev *dev) static int hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1097,7 +1376,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1107,7 +1386,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Request Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1117,7 +1396,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Response Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1129,19 +1408,19 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, set_arp_rep_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, !add); + ethertype_filter, !add); set_arp_req_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1150,7 +1429,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the LACP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1160,7 +1439,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the OAM Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_OAM, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d", add ? 
"Add" : "Del", ret); @@ -1172,14 +1451,14 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, set_arp_oam_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1188,7 +1467,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the LLDP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1198,7 +1477,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the CDCP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_CDCP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1210,14 +1489,14 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, set_arp_cdcp_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1245,7 +1524,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, return hinic3_flow_set_arp_filter(dev, ethertype_filter, add); case RTE_ETHER_TYPE_RARP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_RARP, ethertype_filter, add); case RTE_ETHER_TYPE_SLOW: return hinic3_flow_set_slow_filter(dev, ethertype_filter, add); @@ -1255,11 +1534,11 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, case RTE_ETHER_TYPE_CNM: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_CNM, ethertype_filter, add); case RTE_ETHER_TYPE_ECP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_ECP, ethertype_filter, add); default: PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d", @@ -1270,7 +1549,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, } static int -hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter) +hinic3_flow_ethertype_rule_nums(struct hinic3_ethertype_filter *ethertype_filter) { switch (ethertype_filter->ether_type) { case RTE_ETHER_TYPE_ARP: @@ -1309,7 +1588,7 @@ hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filte */ int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { /* Get dev private info. 
*/ diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h index 8659f588d9..277d89d4fd 100644 --- a/drivers/net/hinic3/hinic3_fdir.h +++ b/drivers/net/hinic3/hinic3_fdir.h @@ -14,6 +14,30 @@ #define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index)) +#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \ + ((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \ + ((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_INVALID_INDEX 0xFFFF + +enum hinic3_ether_type { + HINIC3_PKT_TYPE_ARP = 1, + HINIC3_PKT_TYPE_ARP_REQ, + HINIC3_PKT_TYPE_ARP_REP, + HINIC3_PKT_TYPE_RARP, + HINIC3_PKT_TYPE_LACP, + HINIC3_PKT_TYPE_LLDP, + HINIC3_PKT_TYPE_OAM, + HINIC3_PKT_TYPE_CDCP, + HINIC3_PKT_TYPE_CNM, + HINIC3_PKT_TYPE_ECP = 10, + HINIC3_PKT_TYPE_BUTT, + + HINIC3_PKT_UNKNOWN = 31, +}; + /* Indicate a traffic filtering rule. */ struct rte_flow { TAILQ_ENTRY(rte_flow) node; @@ -30,6 +54,8 @@ struct hinic3_fdir_rule_key { uint16_t src_port; uint16_t dst_port; uint8_t proto; + uint8_t vlan_flag; + uint16_t ether_type; }; struct hinic3_fdir_filter { @@ -42,17 +68,34 @@ struct hinic3_fdir_filter { uint32_t rq_index; /**< Queue assigned when matched. */ }; +struct hinic3_ethertype_filter { + int tcam_index[HINIC3_PKT_TYPE_BUTT]; + uint16_t ether_type; /**< Ether type to match */ + uint16_t queue; /**< Queue assigned to when match*/ +}; + /* This structure is used to describe a basic filter type. */ struct hinic3_filter_t { uint16_t filter_rule_nums; enum rte_filter_type filter_type; - struct rte_eth_ethertype_filter ethertype_filter; + struct hinic3_ethertype_filter ethertype_filter; struct hinic3_fdir_filter fdir_filter; }; +enum hinic3_action_type { + HINIC3_ACTION_ADD, + HINIC3_ACTION_NOT_ADD, +}; + enum hinic3_fdir_tunnel_mode { HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0, - HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_NVGRE = 2, + HINIC3_FDIR_TUNNEL_MODE_FC = 3, + HINIC3_FDIR_TUNNEL_MODE_GPE = 4, + HINIC3_FDIR_TUNNEL_MODE_GENEVE = 5, + HINIC3_FDIR_TUNNEL_MODE_NSH = 6, + HINIC3_FDIR_TUNNEL_MODE_IPIP = 7, }; enum hinic3_fdir_ip_type { @@ -61,7 +104,6 @@ enum hinic3_fdir_ip_type { HINIC3_FDIR_IP_TYPE_ANY = 2, }; -/* Describe the key structure of the TCAM. */ struct hinic3_tcam_key_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -77,11 +119,13 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t dipv4_l : 16; - uint32_t rsvd2 : 16; + + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t dport : 16; uint32_t sport : 16; @@ -89,9 +133,10 @@ struct hinic3_tcam_key_mem { uint32_t rsvd6 : 16; uint32_t outer_sipv4_h : 16; - uint32_t outer_sipv4_l : 16; + uint32_t outer_sipv4_l : 16; uint32_t outer_dipv4_h : 16; + uint32_t outer_dipv4_l : 16; uint32_t vni_h : 16; @@ -110,13 +155,14 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t sipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t dipv4_l : 16; uint32_t rsvd3; uint32_t dport : 16; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t rsvd5 : 16; uint32_t sport : 16; @@ -135,18 +181,90 @@ struct hinic3_tcam_key_mem { #endif }; -/* - * Define the IPv6-related TCAM key data structure in common - * scenarios or IPv6 tunnel scenarios. 
- */ +struct hinic3_tcam_key_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h: 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t sipv4_h : 16; + + uint32_t sipv4_l : 16; + uint32_t rsvd5 : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t rsvd8 : 16; + uint32_t dipv4_h : 16; + + uint32_t dipv4_l : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd5 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t sipv4_h : 16; + uint32_t vni_l : 16; + + uint32_t rsvd5 : 16; + uint32_t sipv4_l : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t dipv4_h : 16; + uint32_t rsvd8 : 16; + + uint32_t sport : 16; + uint32_t dipv4_l :16; + + uint32_t rsvd9 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; - /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */ uint32_t ip_proto : 8; uint32_t tunnel_type : 4; uint32_t outer_ip_type : 1; - uint32_t rsvd1 : 3; + uint32_t vlan_flag : 1; + uint32_t rsvd1 : 2; uint32_t function_id : 15; uint32_t ip_type : 1; @@ -179,7 +297,9 @@ struct hinic3_tcam_key_ipv6_mem { uint32_t dipv6_key7 : 16; uint32_t rsvd2 : 16; #else - uint32_t rsvd1 : 3; + uint32_t rsvd1 : 2; + uint32_t vlan_flag : 1; + uint32_t outer_ip_type : 1; uint32_t tunnel_type : 4; uint32_t ip_proto : 8; @@ -218,10 +338,86 @@ struct hinic3_tcam_key_ipv6_mem { #endif }; -/* - * Define the tcam key value data structure related to IPv6 in - * the VXLAN scenario. 
- */ +struct hinic3_tcam_key_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t sipv6_key0 : 16; + + uint32_t sipv6_key1 : 16; + uint32_t sipv6_key2 : 16; + + uint32_t sipv6_key3 : 16; + uint32_t sipv6_key4 : 16; + + uint32_t sipv6_key5 : 16; + uint32_t sipv6_key6 : 16; + + uint32_t sipv6_key7 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t sipv6_key0 : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t sipv6_key2 : 16; + uint32_t sipv6_key1 : 16; + + uint32_t sipv6_key4 : 16; + uint32_t sipv6_key3 : 16; + + uint32_t sipv6_key6 : 16; + uint32_t sipv6_key5 : 16; + + uint32_t dipv6_key0 : 16; + uint32_t sipv6_key7 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd2 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_vxlan_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -246,7 +442,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t sport : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3 : 16; uint32_t outer_sipv4_h : 16; @@ -281,7 +478,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t dipv6_key7 : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t sport : 16; uint32_t outer_sipv4_h : 16; @@ -298,6 +496,88 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { #endif }; +struct hinic3_tcam_key_vxlan_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t rsvd3 : 16; + + uint32_t rsvd4 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t rsvd3 : 16; + uint32_t vni_l : 16; + + uint32_t dipv6_key0 : 16; + 
uint32_t rsvd4 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd5 : 16; + uint32_t dport : 16; +#endif +}; + /* * TCAM key structure. The two unions indicate the key and mask respectively. * The TCAM key is consistent with the TCAM entry. @@ -307,18 +587,26 @@ struct hinic3_tcam_key { struct hinic3_tcam_key_mem key_info; struct hinic3_tcam_key_ipv6_mem key_info_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_info_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_info_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_info_vxlan_ipv6_htn; }; union { struct hinic3_tcam_key_mem key_mask; struct hinic3_tcam_key_ipv6_mem key_mask_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_mask_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_mask_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_mask_vxlan_ipv6_htn; }; }; /* Structure indicates the TCAM filter. */ struct hinic3_tcam_filter { - TAILQ_ENTRY(hinic3_tcam_filter) - entries; /**< Filter entry, used for linked list operations. */ + /** Filter entry, used for linked list operations. */ + TAILQ_ENTRY(hinic3_tcam_filter) entries; uint16_t dynamic_block_id; /**< Dynamic block ID. */ uint16_t index; /**< TCAM index. */ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */ @@ -362,33 +650,24 @@ struct hinic3_tcam_info { #define HINIC3_CNM_RULE_NUM 1 #define HINIC3_ECP_RULE_NUM 2 +#define HINIC3_UINT1_MAX 0x1 +#define HINIC3_UINT2_MAX 0x3 +#define HINIC3_UINT3_MAX 0x7 +#define HINIC3_UINT4_MAX 0xf +#define HINIC3_UINT5_WIDTH 0x5 +#define HINIC3_UINT5_MAX 0x1f +#define HINIC3_UINT15_MAX 0x7fff + /* Define Ethernet type. */ #define RTE_ETHER_TYPE_CNM 0x22e7 #define RTE_ETHER_TYPE_ECP 0x8940 -/* Protocol type of the data packet. */ -enum hinic3_ether_type { - HINIC3_PKT_TYPE_ARP = 1, - HINIC3_PKT_TYPE_ARP_REQ, - HINIC3_PKT_TYPE_ARP_REP, - HINIC3_PKT_TYPE_RARP, - HINIC3_PKT_TYPE_LACP, - HINIC3_PKT_TYPE_LLDP, - HINIC3_PKT_TYPE_OAM, - HINIC3_PKT_TYPE_CDCP, - HINIC3_PKT_TYPE_CNM, - HINIC3_PKT_TYPE_ECP = 10, - - HINIC3_PKT_UNKNOWN = 31, -}; - int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add); int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add); - void hinic3_free_fdir_filter(struct rte_eth_dev *dev); int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, uint32_t queue_id, uint32_t able); diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index d0acba4cf4..e1741d1156 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -277,22 +277,6 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); -/** - * Get cmdq ops software tile NIC(stn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_stn_ops(void); - -/** - * Get cmdq ops hardware tile NIC(htn) supported. - * - * @return - * Pointer to ops. - */ -struct hinic3_nic_cmdq_ops *hinic3_cmdq_get_htn_ops(void); - /** * Update driver feature capabilities. 
 *
diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c
index af684e77ba..bf5bf66d2a 100644
--- a/drivers/net/hinic3/hinic3_rx.c
+++ b/drivers/net/hinic3/hinic3_rx.c
@@ -22,8 +22,7 @@
 * Current pi.
 */
 static inline void
-hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe,
- uint16_t *pi)
+hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe, uint16_t *pi)
 {
 *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx);
@@ -84,8 +83,7 @@ hinic3_rx_fill_wqe(struct hinic3_rxq *rxq)
 if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
 /* Unit of cqe length is 16B. */
- hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge,
- cqe_dma,
+ hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge, cqe_dma,
 HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT);
 /* Use fixed len. */
 rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len;
@@ -436,12 +434,18 @@ hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev,
 rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
 rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
 rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
- rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
 rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
- rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
 rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
 rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+ if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) {
+ rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ } else {
+ rss_type.ipv6_ext = 0;
+ rss_type.tcp_ipv6_ext = 0;
+ }
+
 err = hinic3_set_rss_type(nic_dev->hwdev, rss_type);
 return err;
 }
@@ -488,8 +492,7 @@ hinic3_update_rss_config(struct rte_eth_dev *dev,
 goto init_rss_fail;
 }
- err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc,
- prio_tc);
+ err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc, prio_tc);
 if (err) {
 PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err);
 goto init_rss_fail;
@@ -797,7 +800,7 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq)
 }
 }
- hinic3_rearm_rxq_mbuf(rxq);
+ (void)hinic3_rearm_rxq_mbuf(rxq);
 if (rxq->nic_dev->num_rss == 1) {
 err = hinic3_set_vport_enable(nic_dev->hwdev, true);
 if (err)
@@ -813,7 +816,6 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq)
 return err;
 }
-
 static inline uint64_t
 hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci)
 {
@@ -1017,8 +1019,8 @@ hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq __rte_unused, volatile struct hini
 uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type);
 uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val);
- cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO);
- cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR);
+ cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO);
+ cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR);
 cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN);
 cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN);
diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c
index fca94dd08e..1a864d0775 100644
--- a/drivers/net/hinic3/hinic3_tx.c
+++ b/drivers/net/hinic3/hinic3_tx.c
@@ -393,7 +393,7 @@ static int
 hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf,
 struct hinic3_sq_wqe_combo *wqe_combo,
- struct hinic3_wqe_info *wqe_info)
+ struct hinic3_wqe_info *wqe_info)
 {
 uint64_t ol_flags = 
mbuf->ol_flags; struct hinic3_offload_info *offload_info = &wqe_info->offload_info; @@ -409,7 +409,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + wqe_info->queue_info.payload_offset = wqe_info->payload_offset >> 1; if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; @@ -457,7 +457,7 @@ hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, offload_info->out_l4_en = 1; set_tx_wqe_offload: - nic_dev->tx_ops->tx_set_wqe_offload(wqe_info, wqe_combo); + nic_dev->tx_ops->nic_tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -627,9 +627,8 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, return err; /* Non-tso mbuf only check sge num. */ - if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) { + if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info); - } /* Tso mbuf. */ wqe_info->payload_offset = @@ -647,8 +646,7 @@ hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, } static inline void -hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, - uint32_t len) +hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr, uint32_t len) { buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr)); buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr)); @@ -832,14 +830,14 @@ hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo, if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT); *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | - SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset, PLDOFF) | SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h index ffafe39fb5..73f4922734 100644 --- a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -52,4 +52,12 @@ struct hinic3_htn_vlan_ctx { uint16_t dest_func_id; }; +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @return + * Pointer to ops. 
+ */
+struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void);
+
 #endif /* _HINIC3_HTN_CMDQ_H_ */
diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build
index b79b753716..b286cdb79c 100644
--- a/drivers/net/hinic3/meson.build
+++ b/drivers/net/hinic3/meson.build
@@ -16,8 +16,6 @@ endif
 cflags += ['-DHW_CONVERT_ENDIAN']
-subdir('base')
-
 sources = files(
 'hinic3_ethdev.c',
 'hinic3_nic_io.c',
@@ -28,3 +26,9 @@
 )
 includes += include_directories('base')
+includes += include_directories('htn_adapt')
+includes += include_directories('stn_adapt')
+
+subdir('base')
+subdir('htn_adapt')
+subdir('stn_adapt')
\ No newline at end of file
diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c
index dfe8598f78..f41f060d17 100644
--- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c
+++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c
@@ -94,7 +94,7 @@ static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, uint
 return HINIC3_UCODE_CMD_MODIFY_VLAN_CTX;
 }
-static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev,
+static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev __rte_unused,
 const uint32_t *indir_table,
 struct hinic3_cmd_buf *cmd_buf)
 {
diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h
index a40c4faa89..f1720c29c7 100644
--- a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h
+++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h
@@ -35,4 +35,12 @@ struct hinic3_stn_vlan_ctx {
 uint32_t vlan_sel;
 };
+/**
+ * Get cmdq ops software tile NIC(stn) supported.
+ *
+ * @return
+ * Pointer to ops.
+ */
+struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void);
+
 #endif /* _HINIC3_STN_CMDQ_H_ */
--
2.45.1.windows.1
^ permalink raw reply related	[flat|nested] 80+ messages in thread
* [PATCH 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC 2026-01-31 10:05 [PATCH 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-01-31 10:05 ` [PATCH 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang @ 2026-01-31 10:05 ` Feifei Wang 2026-01-31 10:05 ` [PATCH 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang ` (4 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-01-31 10:05 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> Add enhance command queue for new SPx series NIC New SPx series NIC uses enhance command queue to send messages to hardware NIC, which is different from previous SPx NIC's common command queue. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_cmd.h | 145 ++++--- drivers/net/hinic3/base/hinic3_cmdq.c | 400 +++++++----------- drivers/net/hinic3/base/hinic3_cmdq.h | 65 ++- drivers/net/hinic3/base/hinic3_cmdq_enhance.c | 110 +++++ drivers/net/hinic3/base/hinic3_cmdq_enhance.h | 169 ++++++++ drivers/net/hinic3/base/hinic3_hw_comm.c | 15 +- drivers/net/hinic3/base/hinic3_hw_comm.h | 31 +- drivers/net/hinic3/base/hinic3_hwdev.c | 13 +- drivers/net/hinic3/base/hinic3_hwdev.h | 18 + drivers/net/hinic3/base/hinic3_mgmt.c | 5 +- drivers/net/hinic3/base/hinic3_mgmt.h | 2 + drivers/net/hinic3/base/hinic3_nic_cfg.c | 74 ++-- drivers/net/hinic3/base/meson.build | 1 + 13 files changed, 667 insertions(+), 381 deletions(-) create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.c create mode 100644 drivers/net/hinic3/base/hinic3_cmdq_enhance.h diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h index 6042ca51bd..b73faca070 100644 --- a/drivers/net/hinic3/base/hinic3_cmd.h +++ b/drivers/net/hinic3/base/hinic3_cmd.h @@ -8,13 +8,13 @@ #define NIC_RSS_CMD_TEMP_ALLOC 0x01 #define NIC_RSS_CMD_TEMP_FREE 0x02 -#define HINIC3_RSS_TYPE_VALID_SHIFT 23 +#define HINIC3_RSS_TYPE_VALID_SHIFT 23 #define HINIC3_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24 #define HINIC3_RSS_TYPE_IPV6_EXT_SHIFT 25 #define HINIC3_RSS_TYPE_TCP_IPV6_SHIFT 26 -#define HINIC3_RSS_TYPE_IPV6_SHIFT 27 +#define HINIC3_RSS_TYPE_IPV6_SHIFT 27 #define HINIC3_RSS_TYPE_TCP_IPV4_SHIFT 28 -#define HINIC3_RSS_TYPE_IPV4_SHIFT 29 +#define HINIC3_RSS_TYPE_IPV4_SHIFT 29 #define HINIC3_RSS_TYPE_UDP_IPV6_SHIFT 30 #define HINIC3_RSS_TYPE_UDP_IPV4_SHIFT 31 #define HINIC3_RSS_TYPE_SET(val, member) \ @@ -23,113 +23,126 @@ #define HINIC3_RSS_TYPE_GET(val, member) \ (((uint32_t)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1) +#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) + /* NIC CMDQ MODE. */ enum hinic3_ucode_cmd { - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, + HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0, HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT = 1, HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE = 4, HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE = 5, HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE = 6, - HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, + HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10, }; /* Commands between NIC to MPU. */ enum hinic3_nic_cmd { /* Only for PFD and VFD. 
*/ - HINIC3_NIC_CMD_VF_REGISTER = 0, + HINIC3_NIC_CMD_VF_REGISTER = 0, /* FUNC CFG */ - HINIC3_NIC_CMD_SET_FUNC_TBL = 5, - HINIC3_NIC_CMD_SET_VPORT_ENABLE = 6, - HINIC3_NIC_CMD_SET_RX_MODE = 7, - HINIC3_NIC_CMD_SQ_CI_ATTR_SET = 8, - HINIC3_NIC_CMD_GET_VPORT_STAT = 9, - HINIC3_NIC_CMD_CLEAN_VPORT_STAT = 10, - HINIC3_NIC_CMD_CLEAR_QP_RESOURCE = 11, - HINIC3_NIC_CMD_CFG_FLEX_QUEUE = 12, + HINIC3_NIC_CMD_SET_FUNC_TBL = 5, + HINIC3_NIC_CMD_SET_VPORT_ENABLE = 6, + HINIC3_NIC_CMD_SET_RX_MODE = 7, + HINIC3_NIC_CMD_SQ_CI_ATTR_SET = 8, + HINIC3_NIC_CMD_GET_VPORT_STAT = 9, + HINIC3_NIC_CMD_CLEAN_VPORT_STAT = 10, + HINIC3_NIC_CMD_CLEAR_QP_RESOURCE = 11, + HINIC3_NIC_CMD_CFG_FLEX_QUEUE = 12, /* LRO CFG */ - HINIC3_NIC_CMD_CFG_RX_LRO = 13, - HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, - HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_RX_LRO = 13, + HINIC3_NIC_CMD_CFG_LRO_TIMER = 14, + HINIC3_NIC_CMD_FEATURE_NEGO = 15, + HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE = 16, + + HINIC3_NIC_CMD_CACHE_OUT_QP_RES = 17, + HINIC3_NIC_CMD_SET_RQ_CI_CTX = 18, + HINIC3_NIC_CMD_SET_RQ_ENABLE = 19, + /* MAC & VLAN CFG */ - HINIC3_NIC_CMD_GET_MAC = 20, - HINIC3_NIC_CMD_SET_MAC = 21, - HINIC3_NIC_CMD_DEL_MAC = 22, - HINIC3_NIC_CMD_UPDATE_MAC = 23, - HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, - HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, - HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + HINIC3_NIC_CMD_GET_MAC = 20, + HINIC3_NIC_CMD_SET_MAC = 21, + HINIC3_NIC_CMD_DEL_MAC = 22, + HINIC3_NIC_CMD_UPDATE_MAC = 23, + HINIC3_NIC_CMD_CFG_FUNC_VLAN = 25, + HINIC3_NIC_CMD_SET_VLAN_FILTER_EN = 26, + HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD = 27, + + HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN = 34, + HINIC3_NIC_CMD_SET_RQ_ENABLE_HTN = 35, + /* RSS CFG */ - HINIC3_NIC_CMD_RSS_CFG = 60, - HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, - HINIC3_NIC_CMD_GET_RSS_CTX_TBL = 62, - HINIC3_NIC_CMD_CFG_RSS_HASH_KEY = 63, - HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE = 64, + HINIC3_NIC_CMD_RSS_CFG = 60, + HINIC3_NIC_CMD_RSS_TEMP_MGR = 61, + HINIC3_NIC_CMD_GET_RSS_CTX_TBL = 62, + HINIC3_NIC_CMD_CFG_RSS_HASH_KEY = 63, + HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE = 64, HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC = 65, /* FDIR */ - HINIC3_NIC_CMD_ADD_TC_FLOW = 80, - HINIC3_NIC_CMD_DEL_TC_FLOW = 81, - HINIC3_NIC_CMD_FLUSH_TCAM = 83, - HINIC3_NIC_CMD_CFG_TCAM_BLOCK = 84, - HINIC3_NIC_CMD_ENABLE_TCAM = 85, + HINIC3_NIC_CMD_ADD_TC_FLOW = 80, + HINIC3_NIC_CMD_DEL_TC_FLOW = 81, + HINIC3_NIC_CMD_FLUSH_TCAM = 83, + HINIC3_NIC_CMD_CFG_TCAM_BLOCK = 84, + HINIC3_NIC_CMD_ENABLE_TCAM = 85, - HINIC3_NIC_CMD_SET_FDIR_STATUS = 91, + HINIC3_NIC_CMD_SET_FDIR_STATUS = 91, /* PORT CFG */ - HINIC3_NIC_CMD_CFG_PAUSE_INFO = 101, - HINIC3_NIC_CMD_VF_COS = 104, + HINIC3_NIC_CMD_CFG_PAUSE_INFO = 101, + HINIC3_NIC_CMD_VF_COS = 104, }; /* COMM commands between driver to MPU. 
*/ enum hinic3_mgmt_cmd { - HINIC3_MGMT_CMD_FUNC_RESET = 0, + HINIC3_MGMT_CMD_FUNC_RESET = 0, HINIC3_MGMT_CMD_FEATURE_NEGO = 1, - HINIC3_MGMT_CMD_SET_FUNC_SVC_USED_STATE = 7, + HINIC3_MGMT_CMD_SET_FUNC_SVC_USED_STATE = 7, HINIC3_MGMT_CMD_SET_CMDQ_CTXT = 20, - HINIC3_MGMT_CMD_SET_VAT = 21, + HINIC3_MGMT_CMD_SET_VAT = 21, HINIC3_MGMT_CMD_CFG_PAGESIZE = 22, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG = 23, HINIC3_MGMT_CMD_SET_DMA_ATTR = 25, + HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT = 39, HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40, HINIC3_MGMT_CMD_GET_FW_VERSION = 60, HINIC3_MGMT_CMD_GET_BOARD_INFO = 61, HINIC3_MGMT_CMD_FAULT_REPORT = 100, - HINIC3_MGMT_CMD_FFM_SET = 103, + HINIC3_MGMT_CMD_FFM_SET = 103, }; enum mag_cmd { - SERDES_CMD_PROCESS = 0, + SERDES_CMD_PROCESS = 0, - MAG_CMD_SET_PORT_CFG = 1, - MAG_CMD_SET_PORT_ADAPT = 2, - MAG_CMD_CFG_LOOPBACK_MODE = 3, + MAG_CMD_SET_PORT_CFG = 1, + MAG_CMD_SET_PORT_ADAPT = 2, + MAG_CMD_CFG_LOOPBACK_MODE = 3, - MAG_CMD_GET_PORT_ENABLE = 5, - MAG_CMD_SET_PORT_ENABLE = 6, - MAG_CMD_GET_LINK_STATUS = 7, - MAG_CMD_SET_LINK_FOLLOW = 8, - MAG_CMD_SET_PMA_ENABLE = 9, - MAG_CMD_CFG_FEC_MODE = 10, + MAG_CMD_GET_PORT_ENABLE = 5, + MAG_CMD_SET_PORT_ENABLE = 6, + MAG_CMD_GET_LINK_STATUS = 7, + MAG_CMD_SET_LINK_FOLLOW = 8, + MAG_CMD_SET_PMA_ENABLE = 9, + MAG_CMD_CFG_FEC_MODE = 10, /* PHY */ - MAG_CMD_GET_XSFP_INFO = 60, - MAG_CMD_SET_XSFP_ENABLE = 61, - MAG_CMD_GET_XSFP_PRESENT = 62, + MAG_CMD_GET_XSFP_INFO = 60, + MAG_CMD_SET_XSFP_ENABLE = 61, + MAG_CMD_GET_XSFP_PRESENT = 62, /* sfp/qsfp single byte read/write, for equipment test. */ - MAG_CMD_SET_XSFP_RW = 63, - MAG_CMD_CFG_XSFP_TEMPERATURE = 64, + MAG_CMD_SET_XSFP_RW = 63, + MAG_CMD_CFG_XSFP_TEMPERATURE = 64, - MAG_CMD_WIRE_EVENT = 100, - MAG_CMD_LINK_ERR_EVENT = 101, + MAG_CMD_WIRE_EVENT = 100, + MAG_CMD_LINK_ERR_EVENT = 101, - MAG_CMD_EVENT_PORT_INFO = 150, - MAG_CMD_GET_PORT_STAT = 151, - MAG_CMD_CLR_PORT_STAT = 152, - MAG_CMD_GET_PORT_INFO = 153, - MAG_CMD_GET_PCS_ERR_CNT = 154, - MAG_CMD_GET_MAG_CNT = 155, - MAG_CMD_DUMP_ANTRAIN_INFO = 156, + MAG_CMD_EVENT_PORT_INFO = 150, + MAG_CMD_GET_PORT_STAT = 151, + MAG_CMD_CLR_PORT_STAT = 152, + MAG_CMD_GET_PORT_INFO = 153, + MAG_CMD_GET_PCS_ERR_CNT = 154, + MAG_CMD_GET_MAG_CNT = 155, + MAG_CMD_DUMP_ANTRAIN_INFO = 156, - MAG_CMD_MAX = 0xFF + MAG_CMD_MAX = 0xFF }; #endif /* _HINIC3_CMD_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c index e2b30ff94e..faf5fd6a54 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.c +++ b/drivers/net/hinic3/base/hinic3_cmdq.c @@ -5,6 +5,7 @@ #include "hinic3_compat.h" #include "hinic3_cmd.h" #include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" #include "hinic3_hwdev.h" #include "hinic3_hwif.h" #include "hinic3_mgmt.h" @@ -125,17 +126,17 @@ #define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi)) -#define CMDQ_PFN(addr, page_size) ((addr) >> (rte_log2_u32(page_size))) - #define FIRST_DATA_TO_WRITE_LAST sizeof(uint64_t) -#define WQE_LCMD_SIZE 64 -#define WQE_SCMD_SIZE 64 +#define WQE_LCMDQ_SIZE 64 +#define WQE_SCMDQ_SIZE 64 +#define WQE_ENHANCE_CMDQ_SIZE 32 #define COMPLETE_LEN 3 #define CMDQ_WQEBB_SIZE 64 #define CMDQ_WQEBB_SHIFT 6 +#define CMDQ_ENHANCE_WQEBB_SHIFT 4 #define CMDQ_WQE_SIZE 64 @@ -203,43 +204,6 @@ hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf) rte_free(cmd_buf); } -static uint32_t -cmdq_wqe_size(enum cmdq_wqe_type wqe_type) -{ - uint32_t wqe_size = 0; - - switch (wqe_type) { - case WQE_LCMD_TYPE: - wqe_size = WQE_LCMD_SIZE; - break; - case WQE_SCMD_TYPE: - wqe_size = 
WQE_SCMD_SIZE; - break; - } - - return wqe_size; -} - -static uint32_t -cmdq_get_wqe_size(enum bufdesc_len len) -{ - uint32_t wqe_size = 0; - - switch (len) { - case BUFDESC_LCMD_LEN: - wqe_size = WQE_LCMD_SIZE; - break; - case BUFDESC_SCMD_LEN: - wqe_size = WQE_SCMD_SIZE; - break; - default: - PMD_DRV_LOG(ERR, "Invalid bufdesc_len"); - break; - } - - return wqe_size; -} - static void cmdq_set_completion(struct hinic3_cmdq_completion *complete, struct hinic3_cmd_buf *buf_out) @@ -274,11 +238,11 @@ cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type, } static void -cmdq_wqe_fill(void *dst, void *src) +cmdq_wqe_fill(void *dst, void *src, int wqe_size) { memcpy((void *)((uint8_t *)dst + FIRST_DATA_TO_WRITE_LAST), (void *)((uint8_t *)src + FIRST_DATA_TO_WRITE_LAST), - CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST); + wqe_size - FIRST_DATA_TO_WRITE_LAST); /* The first 8 bytes should be written last. */ rte_atomic_thread_fence(rte_memory_order_release); @@ -369,190 +333,100 @@ cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in); } -/** - * Prepare necessary context for command queue, send a synchronous command with - * a direct response to hardware. It waits for completion of command by polling - * command queue for a response. - * - * @param[in] cmdq - * The command queue object that represents the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the command parameters. - * @param[out] out_param - * A pointer to the location where the response data will be stored, if - * available. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. - */ -static int -cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - uint64_t *out_param, uint32_t timeout) +static void cmdq_sync_wqe_prepare(struct hinic3_cmdq *cmdq, u8 mod, u8 cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + struct hinic3_cmdq_wqe *curr_wqe, u16 curr_pi, + enum hinic3_cmdq_cmd_type nic_cmd_type) { struct hinic3_cmdq_wqe wqe; - struct hinic3_wq *wq = cmdq->wq; - struct hinic3_cmdq_wqe *curr_wqe = NULL; - struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; - - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + int wrapped, wqe_size; + enum cmdq_cmd_type cmd_type; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ - rte_spinlock_lock(&cmdq->cmdq_lock); + wqe_size = cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ ? 
WQE_LCMD_SIZE : WQE_ENHANCED_CMDQ_SIZE; - curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } + memset(&wqe, 0, (u32)wqe_size); - memset(&wqe, 0, sizeof(wqe)); wrapped = cmdq->wrapped; - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } - - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped, - mod, cmd, curr_prod_idx); - - - hinic3_cpu_to_hw(&wqe, wqe_size); - - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); - - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP; - - cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - - timeo = timeout ? timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ - - if (out_param) { - wqe_lcmd = &curr_wqe->wqe_lcmd; - *out_param = rte_cpu_to_be_64(wqe_lcmd->completion.direct_resp); - } - - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; + cmd_type = (nic_cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP) ? + SYNC_CMD_DIRECT_RESP : SYNC_CMD_SGE_RESP; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_set_lcmd_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd, curr_pi); + else + enhance_cmdq_set_wqe(&wqe, cmd_type, buf_in, buf_out, wrapped, mod, cmd); -cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); + /* The data written to HW should be in Big Endian Format */ + hinic3_hw_be32_len(&wqe, wqe_size); - return err; + cmdq_wqe_fill(curr_wqe, &wqe, wqe_size); } -/** - * Send a synchronous command with detailed response and wait for the - * completion. - * - * @param[in] cmdq - * The command queue object representing the queue to send the command to. - * @param[in] mod - * The module type that the command belongs to. - * @param[in] cmd - * The command to be executed. - * @param[in] buf_in - * The input buffer containing the parameters for the command. - * @param[out] buf_out - * The output buffer where the detailed response from the hardware will be - * stored. - * @param[in] timeout - * The timeout value (ms) to wait for the command completion. If zero, a default - * timeout will be used. - * - * @return - * 0 on success, non-zero on failure. - * - -EBUSY: The command queue is busy. - * - -ETIMEDOUT: The command did not complete within the specified timeout. 
- */ -static int -cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, - uint8_t cmd, struct hinic3_cmd_buf *buf_in, - struct hinic3_cmd_buf *buf_out, uint32_t timeout) +#define NUM_WQEBBS_FOR_CMDQ_WQE 1 +#define NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE 2 + +static int cmdq_sync_cmd(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod, u8 cmd, + struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, + u64 *out_param, u32 timeout, enum hinic3_cmdq_cmd_type nic_cmd_type) { - struct hinic3_cmdq_wqe wqe; struct hinic3_wq *wq = cmdq->wq; + struct hinic3_cmdq_wqe wqe; struct hinic3_cmdq_wqe *curr_wqe = NULL; - uint16_t curr_prod_idx, next_prod_idx, num_wqebbs; - uint32_t timeo, wqe_size; - int wrapped, err; - wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE); - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq); + u16 curr_prod_idx, next_prod_idx, num_wqebbs; + u32 time; + u64 *direct_resp = NULL; + int err; + + num_wqebbs = (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) ? + NUM_WQEBBS_FOR_CMDQ_WQE : NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; - /* ensure thread safety and maintain wrapped and doorbell index correct. */ + /* Keep wrapped and doorbell index correct */ rte_spinlock_lock(&cmdq->cmdq_lock); curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx); - if (curr_wqe == NULL) { - err = -EBUSY; - goto cmdq_unlock; - } + if (!curr_wqe) { + err = -EBUSY; + goto cmdq_unlock; + } - memset(&wqe, 0, sizeof(wqe)); - wrapped = cmdq->wrapped; + memset(&wqe, 0, sizeof(wqe)); - next_prod_idx = curr_prod_idx + num_wqebbs; - if (next_prod_idx >= wq->q_depth) { - cmdq->wrapped = !cmdq->wrapped; - next_prod_idx -= wq->q_depth; - } + cmdq_sync_wqe_prepare(cmdq, mod, cmd, buf_in, buf_out, curr_wqe, curr_prod_idx, nic_cmd_type); + cmdq->cmd_infos[curr_prod_idx].cmd_type = nic_cmd_type; - cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped, - mod, cmd, curr_prod_idx); - - hinic3_cpu_to_hw(&wqe, wqe_size); + next_prod_idx = curr_prod_idx + num_wqebbs; + if (next_prod_idx >= wq->q_depth) { + cmdq->wrapped = !cmdq->wrapped; + next_prod_idx -= wq->q_depth; + } + cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx); - /* Cmdq wqe is not shadow, therefore wqe will be written to wq. */ - cmdq_wqe_fill(curr_wqe, &wqe); + time = msecs_to_jiffies(timeout ? timeout : CMDQ_CMD_TIMEOUT); + err = hinic3_cmdq_poll_msg(cmdq, time); + if (err) { + PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", curr_prod_idx); + err = -ETIMEDOUT; + goto cmdq_unlock; + } - cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP; + rte_smp_rmb(); /* Read error code after completion */ - cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx); + if (out_param) { + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + direct_resp = (u64 *)(&curr_wqe->wqe_lcmd.completion.direct_resp); + else + direct_resp = (u64 *)(&curr_wqe->enhanced_cmdq_wqe.completion.sge_resp_lo_addr); - timeo = timeout ? 
timeout : CMDQ_CMD_TIMEOUT; - err = hinic3_cmdq_poll_msg(cmdq, timeo); - if (err) { - PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x", - curr_prod_idx); - err = -ETIMEDOUT; - goto cmdq_unlock; - } - - rte_smp_rmb(); /*Ensure all cmdq return messages are completed*/ + *out_param = cpu_to_be64(*direct_resp); + } - if (cmdq->errcode[curr_prod_idx]) - err = cmdq->errcode[curr_prod_idx]; + if (cmdq->errcode[curr_prod_idx]) + err = cmdq->errcode[curr_prod_idx]; cmdq_unlock: - rte_spinlock_unlock(&cmdq->cmdq_lock); + rte_spinlock_unlock(&cmdq->cmdq_lock); - return err; + return err; } static int @@ -586,11 +460,11 @@ wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs) return -EBUSY; } -int -hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, - struct hinic3_cmd_buf *buf_in, uint64_t *out_param, uint32_t timeout) +int hinic3_cmdq_direct_resp(void *hwdev, enum hinic3_mod_type mod, u8 cmd, + struct hinic3_cmd_buf *buf_in, + u64 *out_param, u32 timeout) { - struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; + struct hinic3_cmdqs *cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs; int err; err = cmdq_params_valid(hwdev, buf_in); @@ -605,8 +479,8 @@ hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, out_param, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, + NULL, out_param, timeout, HINIC3_CMD_TYPE_DIRECT_RESP); } int @@ -628,8 +502,8 @@ hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, ui return err; } - return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, - cmd, buf_in, buf_out, timeout); + return cmdq_sync_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod, cmd, buf_in, buf_out, + NULL, timeout, HINIC3_CMD_TYPE_SGE_RESP); } static void @@ -643,21 +517,23 @@ clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) { struct hinic3_ctrl *ctrl = NULL; uint32_t header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info); - int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN); - uint32_t wqe_size = cmdq_get_wqe_size(buf_len); uint16_t num_wqebbs; - - if (wqe_size == WQE_LCMD_SIZE) - ctrl = &wqe->wqe_lcmd.ctrl; - else - ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; - - /* Clear HW busy bit. */ - ctrl->ctrl_info = 0; + enum data_format df; + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT); + if (df == DATA_SGE) + ctrl = &wqe->wqe_lcmd.ctrl; + else + ctrl = &wqe->inline_wqe.wqe_scmd.ctrl; + ctrl->ctrl_info = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_CMDQ_WQE; + } else { + wqe->enhanced_cmdq_wqe.completion.cs_format = 0; /* clear HW busy bit */ + num_wqebbs = NUM_WQEBBS_FOR_ENHANCE_CMDQ_WQE; + } rte_atomic_thread_fence(rte_memory_order_release); /**< Verify wqe is cleared. 
*/ - num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq); hinic3_put_wqe(cmdq->wq, num_wqebbs); } @@ -735,25 +611,32 @@ static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; - struct hinic3_cmd_cmdq_ctxt cmdq_ctxt; - enum hinic3_cmdq_type cmdq_type; + struct hinic3_cmd_cmdq_ctxt cmdq_ctxt = {0}; + enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; uint16_t out_size = sizeof(cmdq_ctxt); + u16 cmd; int err; - for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { - memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt)); - cmdq_ctxt.ctxt_info = cmdqs->cmdq[cmdq_type].cmdq_ctxt; + for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + if (hwdev->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + memcpy((void *)&cmdq_ctxt.ctxt_info, + (void *)&cmdqs->cmdq[cmdq_type].cmdq_ctxt, + sizeof(cmdq_ctxt.ctxt_info)); + cmd = HINIC3_MGMT_CMD_SET_CMDQ_CTXT; + } else { + memcpy((void *)&cmdq_ctxt.enhance_ctxt_info, + (void *)&cmdqs->cmdq[cmdq_type].cmdq_enhance_ctxt, + sizeof(cmdq_ctxt.enhance_ctxt_info)); + cmd = HINIC3_MGMT_CMD_SET_ENHANCE_CMDQ_CTXT; + } cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev); cmdq_ctxt.cmdq_id = cmdq_type; - err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, - HINIC3_MGMT_CMD_SET_CMDQ_CTXT, + err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, &cmdq_ctxt, sizeof(cmdq_ctxt), - &cmdq_ctxt, &out_size); - + &cmdq_ctxt, &out_size, 0); if (err || !out_size || cmdq_ctxt.status) { - PMD_DRV_LOG(ERR, - "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", + PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x", err, cmdq_ctxt.status, out_size); return -EFAULT; } @@ -794,6 +677,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) cmdqs->cmdqs_db_base = (uint8_t *)db_base; for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) { + cmdqs->cmdq[cmdq_type].cmdqs = cmdqs; err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, &cmdqs->saved_wqs[cmdq_type], cmdq_type); if (err) { @@ -801,8 +685,10 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) goto init_cmdq_err; } - cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], - &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + if (cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) + cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type], &cmdqs->cmdq[cmdq_type].cmdq_ctxt); + else + enhance_cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type]); } err = hinic3_set_cmdq_ctxts(hwdev); @@ -821,7 +707,7 @@ hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs) } int -hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_init(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = NULL; size_t saved_wqs_size; @@ -835,6 +721,13 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) hwdev->cmdqs = cmdqs; cmdqs->hwdev = hwdev; + if (HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev)) + cmdqs->cmdq_mode = HINIC3_ENHANCE_CMDQ; + else + cmdqs->cmdq_mode = HINIC3_NORMAL_CMDQ; + wqebb_shift = (cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) ? 
+ CMDQ_ENHANCE_WQEBB_SHIFT : CMDQ_WQEBB_SHIFT; + saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq); cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0); if (!cmdqs->saved_wqs) { @@ -857,7 +750,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES, - HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT, + HINIC3_CMDQ_WQ_BUF_SIZE, wqebb_shift, HINIC3_CMDQ_DEPTH); if (err) { PMD_DRV_LOG(ERR, "Allocate cmdq failed"); @@ -884,7 +777,7 @@ hinic3_init_cmdqs(struct hinic3_hwdev *hwdev) } void -hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) +hinic3_cmdq_free(struct hinic3_hwdev *hwdev) { struct hinic3_cmdqs *cmdqs = hwdev->cmdqs; enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC; @@ -900,14 +793,36 @@ hinic3_free_cmdqs(struct hinic3_hwdev *hwdev) rte_free(cmdqs); } +static int +hinic3_check_cmdq_done(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe) +{ + struct hinic3_ctrl *ctrl = NULL; + uint32_t ctrl_info; + + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + /* Only arm bit using scmd wqe, the wqe is lcmd. */ + ctrl = &wqe->wqe_lcmd.ctrl; + ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); + + if (!WQE_COMPLETED(ctrl_info)) + return -EBUSY; + } else { + ctrl_info = wqe->enhanced_cmdq_wqe.completion.cs_format; + ctrl_info = hinic3_hw_cpu32(ctrl_info); + + if (!ENHANCE_CMDQ_WQE_CS_GET(ctrl_info, HW_BUSY)) + return -EBUSY; + } + return 0; +} + static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) { struct hinic3_cmdq_wqe *wqe = NULL; struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL; - struct hinic3_ctrl *ctrl = NULL; struct hinic3_cmdq_cmd_info *cmd_info = NULL; - uint32_t status_info, ctrl_info; + uint32_t status_info; uint16_t ci; int errcode; uint64_t end; @@ -928,13 +843,10 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) return -EINVAL; } - /* Only arm bit is using scmd wqe, the wqe is lcmd. */ - wqe_lcmd = &wqe->wqe_lcmd; - ctrl = &wqe_lcmd->ctrl; + /* Only arm bit using scmd wqe, the wqe is lcmd. */ end = cycles + msecs_to_cycles(timeout); do { - ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info); - if (WQE_COMPLETED(ctrl_info)) { + if (hinic3_check_cmdq_done(cmdq, wqe) == 0) { done = 1; break; } @@ -943,8 +855,14 @@ hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, uint32_t timeout) } while (time_before(cycles, end)); if (done) { - status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); - errcode = WQE_ERRCODE_GET(status_info, VAL); + if (cmdq->cmdqs->cmdq_mode == HINIC3_NORMAL_CMDQ) { + wqe_lcmd = &wqe->wqe_lcmd; + status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info); + errcode = WQE_ERRCODE_GET(status_info, VAL); + } else { + status_info = hinic3_hw_cpu32(wqe->enhanced_cmdq_wqe.completion.cs_format); + errcode = ENHANCE_CMDQ_WQE_CS_GET(status_info, ERR_CODE); + } cmdq_update_errcode(cmdq, ci, errcode); clear_wqe_complete_bit(cmdq, wqe); err = 0; diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h index deac909488..75a5dbaab5 100644 --- a/drivers/net/hinic3/base/hinic3_cmdq.h +++ b/drivers/net/hinic3/base/hinic3_cmdq.h @@ -7,31 +7,62 @@ #include "hinic3_mgmt.h" #include "hinic3_wq.h" +#include "hinic3_pmd_cmdq_enhance.h" #define HINIC3_SCMD_DATA_LEN 16 /* Pmd driver uses 64, kernel l2nic uses 4096. 
*/ #define HINIC3_CMDQ_DEPTH 64 -#define HINIC3_CMDQ_BUF_SIZE 2048U +#define HINIC3_CMDQ_BUF_SIZE 1024U #define HINIC3_CEQ_ID_CMDQ 0 -enum cmdq_scmd_type { CMDQ_SET_ARM_CMD = 2 }; +#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -enum cmdq_wqe_type { WQE_LCMD_TYPE = 0, WQE_SCMD_TYPE = 1 }; +enum hinic3_cmdq_mode { + HINIC3_NORMAL_CMDQ, + HINIC3_ENHANCE_CMDQ +}; + +enum cmdq_scmd_type { + CMDQ_SET_ARM_CMD = 2 +}; -enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 }; +enum cmdq_wqe_type { + WQE_LCMD_TYPE, + WQE_SCMD_TYPE +}; + +enum ctrl_sect_len { + CTRL_SECT_LEN = 1, + CTRL_DIRECT_SECT_LEN = 2 +}; -enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 }; +enum bufdesc_len { + BUFDESC_LCMD_LEN = 2, + BUFDESC_SCMD_LEN = 3 +}; -enum data_format { DATA_SGE = 0}; +enum data_format { + DATA_SGE +}; -enum completion_format { COMPLETE_DIRECT = 0, COMPLETE_SGE = 1 }; +enum completion_format { + COMPLETE_DIRECT, + COMPLETE_SGE +}; -enum completion_request { CEQ_SET = 1 }; +enum completion_request { + CEQ_SET = 1 +}; -enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD }; +enum cmdq_cmd_type { + SYNC_CMD_DIRECT_RESP, + SYNC_CMD_SGE_RESP, + ASYNC_CMD +}; enum hinic3_cmdq_type { HINIC3_CMDQ_SYNC, @@ -44,7 +75,10 @@ enum hinic3_db_src_type { HINIC3_DB_SRC_L2NIC_SQ_TYPE }; -enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE }; +enum hinic3_cmdq_db_type { + HINIC3_DB_SQ_RQ_TYPE, + HINIC3_DB_CMDQ_TYPE +}; /* Cmdq ack type. */ enum hinic3_ack_type { @@ -52,7 +86,7 @@ enum hinic3_ack_type { HINIC3_ACK_TYPE_SHARE_CQN = 1, HINIC3_ACK_TYPE_APP_CQN = 2, - HINIC3_MOD_ACK_MAX = 15 + HINIC3_MOD_ACK_MAX = 15 }; /* Cmdq wqe ctrls. */ @@ -126,6 +160,7 @@ struct hinic3_cmdq_wqe { union { struct hinic3_cmdq_inline_wqe inline_wqe; struct hinic3_cmdq_wqe_lcmd wqe_lcmd; + struct enhanced_cmdq_wqe enhanced_cmdq_wqe; }; }; @@ -144,6 +179,7 @@ struct hinic3_cmd_cmdq_ctxt { uint8_t rsvd1[5]; struct hinic3_cmdq_ctxt_info ctxt_info; + struct enhance_cmdq_ctxt_info enhance_ctxt_info; }; enum hinic3_cmdq_status { @@ -173,8 +209,10 @@ struct hinic3_cmdq { rte_spinlock_t cmdq_lock; struct hinic3_cmdq_ctxt_info cmdq_ctxt; + struct enhance_cmdq_ctxt_info cmdq_enhance_ctxt; struct hinic3_cmdq_cmd_info *cmd_infos; + struct hinic3_cmdqs *cmdqs; }; struct hinic3_cmdqs { @@ -188,6 +226,7 @@ struct hinic3_cmdqs { struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES]; uint32_t status; + uint8_t cmdq_mode; }; struct hinic3_cmd_buf { @@ -215,8 +254,8 @@ int hinic3_cmdq_direct_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod int hinic3_cmdq_detail_resp(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, uint8_t cmd, struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out, uint32_t timeout); -int hinic3_init_cmdqs(struct hinic3_hwdev *hwdev); +int hinic3_cmdq_init(struct hinic3_hwdev *hwdev); -void hinic3_free_cmdqs(struct hinic3_hwdev *hwdev); +void hinic3_cmdq_free(struct hinic3_hwdev *hwdev); #endif /* _HINIC3_CMDQ_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.c b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c new file mode 100644 index 0000000000..22a4c81482 --- /dev/null +++ b/drivers/net/hinic3/base/hinic3_cmdq_enhance.c @@ -0,0 +1,110 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Huawei Technologies Co., Ltd + */ + +#include <rte_mbuf.h> + +#include "hinic3_compat.h" +#include "hinic3_hwdev.h" +#include "hinic3_hwif.h" +#include "hinic3_wq.h" +#include "hinic3_cmd.h" +#include 
"hinic3_mgmt.h" +#include "hinic3_cmdq.h" +#include "hinic3_cmdq_enhance.h" + +#define WQ_PREFETCH_MAX 4 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +void enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq) +{ + struct enhance_cmdq_ctxt_info *ctxt_info = &cmdq->cmdq_enhance_ctxt; + struct hinic3_wq *wq = cmdq->wq; + uint64_t cmdq_first_block_paddr, pfn; + uint16_t start_ci = (uint16_t)wq->cons_idx; + uint32_t start_pi = (uint16_t)wq->prod_idx; + + /* The data in HW is Big Endian Format */ + cmdq_first_block_paddr = wq->queue_buf_paddr; + pfn = CMDQ_PFN(cmdq_first_block_paddr, RTE_PGSIZE_4K); + + /* First part 16B */ + ctxt_info->eq_cfg = + ENHANCED_CMDQ_SET(pfn, CTXT0_CI_WQE_ADDR) | + ENHANCED_CMDQ_SET(0, CTXT0_EQ) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_ARM) | + ENHANCED_CMDQ_SET(0, CTXT0_CEQ_EN) | + ENHANCED_CMDQ_SET(1, CTXT0_HW_BUSY_BIT); + + ctxt_info->dfx_pi_ci = + ENHANCED_CMDQ_SET(0, CTXT1_Q_DIS) | + ENHANCED_CMDQ_SET(0, CTXT1_ERR_CODE) | + ENHANCED_CMDQ_SET(start_pi, CTXT1_PI) | + ENHANCED_CMDQ_SET(start_ci, CTXT1_CI); + + /* Second part 16B */ + ctxt_info->pft_thd = + ENHANCED_CMDQ_SET(CI_HIGN_IDX(start_ci), CTXT2_PFT_CI) | + ENHANCED_CMDQ_SET(1, CTXT2_O_BIT) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MIN, CTXT2_PFT_MIN) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_MAX, CTXT2_PFT_MAX) | + ENHANCED_CMDQ_SET(WQ_PREFETCH_THRESHOLD, CTXT2_PFT_THD); + ctxt_info->pft_ci = + ENHANCED_CMDQ_SET(pfn, CTXT3_PFT_CI_ADDR) | + ENHANCED_CMDQ_SET(start_ci, CTXT3_PFT_CI); + + /* Third part 16B */ + cmdq_first_block_paddr = cmdq_first_block_paddr; + pfn = WQ_BLOCK_PFN(cmdq_first_block_paddr); + + ctxt_info->ci_cla_addr = ENHANCED_CMDQ_SET(pfn, CTXT4_CI_CLA_ADDR); +} + +static void enhance_cmdq_set_completion(struct cmdq_enhance_completion *completion, + const struct hinic3_cmd_buf *buf_out) +{ + completion->sge_resp_hi_addr = upper_32_bits(buf_out->dma_addr); + completion->sge_resp_lo_addr = lower_32_bits(buf_out->dma_addr); + completion->sge_resp_len = HINIC3_CMDQ_BUF_SIZE; +} + +void enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, + enum cmdq_cmd_type cmd_type, + const struct hinic3_cmd_buf *buf_in, + const struct hinic3_cmd_buf *buf_out, int wrapped, + uint8_t mod, uint8_t cmd) +{ + struct enhanced_cmdq_wqe *enhanced_wqe = &wqe->enhanced_cmdq_wqe; + + enhanced_wqe->ctrl_sec.header = + ENHANCE_CMDQ_WQE_HEADER_SET(buf_in->size, SEND_SGE_LEN) | + ENHANCE_CMDQ_WQE_HEADER_SET(1, BDSL) | + ENHANCE_CMDQ_WQE_HEADER_SET(DATA_SGE, DF) | + ENHANCE_CMDQ_WQE_HEADER_SET(NORMAL_WQE_TYPE, DN) | + ENHANCE_CMDQ_WQE_HEADER_SET(COMPACT_WQE_TYPE, EC) | + ENHANCE_CMDQ_WQE_HEADER_SET((uint32_t)wrapped, HW_BUSY_BIT); + + enhanced_wqe->ctrl_sec.sge_send_hi_addr = upper_32_bits(buf_in->dma_addr); + enhanced_wqe->ctrl_sec.sge_send_lo_addr = lower_32_bits(buf_in->dma_addr); + + enhanced_wqe->completion.cs_format = + ENHANCE_CMDQ_WQE_CS_SET(cmd, CMD) | + ENHANCE_CMDQ_WQE_CS_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE) | + ENHANCE_CMDQ_WQE_CS_SET(mod, MOD); + + switch (cmd_type) { + case SYNC_CMD_DIRECT_RESP: + enhanced_wqe->completion.cs_format |= ENHANCE_CMDQ_WQE_CS_SET(INLINE_DATA, CF); + break; + case SYNC_CMD_SGE_RESP: + if (buf_out) { + enhanced_wqe->completion.cs_format |= + ENHANCE_CMDQ_WQE_CS_SET(SGE_RESPONSE, CF); + enhance_cmdq_set_completion(&enhanced_wqe->completion, buf_out); + } + break; + case ASYNC_CMD: + break; + } +} diff --git a/drivers/net/hinic3/base/hinic3_cmdq_enhance.h b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h new file mode 100644 index 0000000000..c762c07eb5 --- /dev/null +++ 
b/drivers/net/hinic3/base/hinic3_cmdq_enhance.h @@ -0,0 +1,169 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_CMDQ_ENHANCE_H_ +#define _HINIC3_CMDQ_ENHANCE_H_ + +#include "hinic3_mgmt.h" + +#define NORMAL_WQE_TYPE 0 +#define COMPACT_WQE_TYPE 1 + +/* First part 16B */ +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT0_RSV1_SHIFT 52 +#define ENHANCED_CMDQ_CTXT0_EQ_SHIFT 53 +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_SHIFT 61 +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_SHIFT 62 +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_SHIFT 63 + +#define ENHANCED_CMDQ_CTXT0_CI_WQE_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT0_RSV1_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_EQ_MASK 0xFFU +#define ENHANCED_CMDQ_CTXT0_CEQ_ARM_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_CEQ_EN_MASK 0x1U +#define ENHANCED_CMDQ_CTXT0_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_SHIFT 0 +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_SHIFT 1 +#define ENHANCED_CMDQ_CTXT1_RSV1_SHIFT 3 +#define ENHANCED_CMDQ_CTXT1_PI_SHIFT 32 +#define ENHANCED_CMDQ_CTXT1_CI_SHIFT 48 + +#define ENHANCED_CMDQ_CTXT1_Q_DIS_MASK 0x1U +#define ENHANCED_CMDQ_CTXT1_ERR_CODE_MASK 0x3U +#define ENHANCED_CMDQ_CTXT1_RSV1_MASK 0x1FFFFFFFU +#define ENHANCED_CMDQ_CTXT1_PI_MASK 0xFFFFU +#define ENHANCED_CMDQ_CTXT1_CI_MASK 0xFFFFU + +/* Second part 16B */ +#define ENHANCED_CMDQ_CTXT2_PFT_CI_SHIFT 0 +#define ENHANCED_CMDQ_CTXT2_O_BIT_SHIFT 4 +#define ENHANCED_CMDQ_CTXT2_PFT_THD_SHIFT 32 +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_SHIFT 46 +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_SHIFT 57 + +#define ENHANCED_CMDQ_CTXT2_PFT_CI_MASK 0xFU +#define ENHANCED_CMDQ_CTXT2_O_BIT_MASK 0x1U +#define ENHANCED_CMDQ_CTXT2_PFT_THD_MASK 0x3FFFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MAX_MASK 0x7FFFU +#define ENHANCED_CMDQ_CTXT2_PFT_MIN_MASK 0x7FU + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT3_PFT_CI_SHIFT 52 + +#define ENHANCED_CMDQ_CTXT3_PFT_CI_ADDR_MASK 0xFFFFFFFFFFFFFU +#define ENHANCED_CMDQ_CTXT3_PFT_CI_MASK 0xFFFFU + +/* Third part 16B */ +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_SHIFT 0 +#define ENHANCED_CMDQ_CTXT4_CI_CLA_ADDR_MASK 0x7FFFFFFFFFFFFFU + +#define ENHANCED_CMDQ_SET(val, member) \ + (((uint64_t)(val) & ENHANCED_CMDQ_##member##_MASK) << \ + ENHANCED_CMDQ_##member##_SHIFT) + +#define CI_IDX_HIGH_SHIFH 12 +#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_SHIFT 0 +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_SHIFT 19 +#define ENHANCE_CMDQ_WQE_HEADER_DF_SHIFT 28 +#define ENHANCE_CMDQ_WQE_HEADER_DN_SHIFT 29 +#define ENHANCE_CMDQ_WQE_HEADER_EC_SHIFT 30 +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_HEADER_SEND_SGE_LEN_MASK 0x3FFFFU +#define ENHANCE_CMDQ_WQE_HEADER_BDSL_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_HEADER_DF_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_DN_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_EC_MASK 0x1U +#define ENHANCE_CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_HEADER_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_HEADER_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_HEADER_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_HEADER_##member##_MASK) + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0 +#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT 4 +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12 +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT 14 +#define 
ENHANCE_CMDQ_WQE_CS_MOD_SHIFT 16 +#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT 31 + +#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU +#define ENHANCE_CMDQ_WQE_CS_CMD_MASK 0xFFU +#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U +#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK 0x1U +#define ENHANCE_CMDQ_WQE_CS_MOD_MASK 0x1FU +#define ENHANCE_CMDQ_WQE_CS_CF_MASK 0x1U + +#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \ + ((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \ + ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) + +#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \ + (((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \ + ENHANCE_CMDQ_WQE_CS_##member##_MASK) + +enum complete_format { + INLINE_DATA, + SGE_RESPONSE +}; + +struct cmdq_enhance_completion { + uint32_t cs_format; + uint32_t sge_resp_hi_addr; + uint32_t sge_resp_lo_addr; + uint32_t sge_resp_len; /* bit 14~31 rsvd, soft can't use. */ +}; + +struct cmdq_enhance_response { + uint32_t cs_format; + uint32_t resvd; + uint64_t direct_data; +}; + +struct sge_send_info { + uint32_t sge_hi_addr; + uint32_t sge_li_addr; + uint32_t seg_len; + uint32_t rsvd; +}; + +struct ctrl_section { + uint32_t header; + uint32_t rsv; + uint32_t sge_send_hi_addr; + uint32_t sge_send_lo_addr; +}; + +struct enhanced_cmdq_wqe { + struct ctrl_section ctrl_sec; /* 16B */ + struct cmdq_enhance_completion completion; /* 16B */ +}; + +/* Enhance cmdq context of hardware */ +struct enhance_cmdq_ctxt_info { + uint64_t eq_cfg; + uint64_t dfx_pi_ci; + + uint64_t pft_thd; + uint64_t pft_ci; + + uint64_t rsv; + uint64_t ci_cla_addr; +}; + +void enhance_cmdq_set_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type, + const struct hinic3_cmd_buf *buf_in, const struct hinic3_cmd_buf *buf_out, + int wrapped, uint8_t mod, uint8_t cmd); + +void enhance_cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq); + +#endif /*_HINIC3_CMDQ_ENHANCE_H_ */ diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c index d259b88a2d..6541bc0428 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.c +++ b/drivers/net/hinic3/base/hinic3_hw_comm.c @@ -12,7 +12,7 @@ #include "hinic3_wq.h" #include "hinic3_nic_cfg.h" -/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */ +/* Buffer sizes must be in ascending order. 
*/ const uint32_t hinic3_hw_rx_buf_size[] = { HINIC3_RX_BUF_SIZE_32B, HINIC3_RX_BUF_SIZE_64B, @@ -239,11 +239,14 @@ hinic3_convert_rx_buf_size(uint32_t rx_buf_sz, uint32_t *match_sz) } static uint16_t -get_hw_rx_buf_size(uint32_t rx_buf_sz) +get_hw_rx_buf_size(struct hinic3_hwdev *hwdev, uint32_t rx_buf_sz) { uint16_t num_hw_types = RTE_DIM(hinic3_hw_rx_buf_size); uint16_t i; + if (HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev)) + return rx_buf_sz; + for (i = 0; i < num_hw_types; i++) { if (hinic3_hw_rx_buf_size[i] == rx_buf_sz) return i; @@ -271,8 +274,12 @@ hinic3_set_root_ctxt(struct hinic3_hwdev *hwdev, uint32_t rq_depth, root_ctxt.cmdq_depth = 0; root_ctxt.lro_en = 1; root_ctxt.rq_depth = (uint16_t)rte_log2_u32(rq_depth); - root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz); + root_ctxt.rx_buf_sz = get_hw_rx_buf_size(hwdev, rx_buf_sz); root_ctxt.sq_depth = (uint16_t)rte_log2_u32(sq_depth); + root_ctxt.cmdq_mode = hwdev->cmdqs->cmdq_mode; + + if (hwdev->cmdqs->cmdq_mode == HINIC3_ENHANCE_CMDQ) + root_ctxt.cmdq_depth--; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, @@ -403,7 +410,7 @@ hinic3_comm_features_nego(struct hinic3_hwdev *hwdev, uint16_t out_size = sizeof(feature_nego); int err; - if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD) + if (!hwdev || !s_feature || size > HINIC3_MAX_FEATURE_QWORD) return -EINVAL; memset(&feature_nego, 0, sizeof(feature_nego)); diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h index b86f5aad8f..186a826ce1 100644 --- a/drivers/net/hinic3/base/hinic3_hw_comm.h +++ b/drivers/net/hinic3/base/hinic3_hw_comm.h @@ -9,17 +9,17 @@ #define HINIC3_MGMT_CMD_OP_GET 0 #define HINIC3_MGMT_CMD_OP_SET 1 -#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 -#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 -#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 -#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 -#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 - -#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU -#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU -#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU -#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U +#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0 +#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8 +#define HINIC3_MSIX_CNT_COALESCE_TIMER_SHIFT 8 +#define HINIC3_MSIX_CNT_PENDING_SHIFT 8 +#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29 + +#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU +#define HINIC3_MSIX_CNT_COALESCE_TIMER_MASK 0xFFU +#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU +#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U #define HINIC3_MSIX_CNT_SET(val, member) \ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \ @@ -129,7 +129,7 @@ struct hinic3_cmd_root_ctxt { uint8_t cmdq_depth; uint16_t rx_buf_sz; uint8_t lro_en; - uint8_t rsvd1; + uint8_t cmdq_mode; uint16_t sq_depth; uint16_t rq_depth; uint64_t rsvd2; @@ -143,17 +143,16 @@ enum hinic3_fw_ver_type { HINIC3_FW_VER_TYPE_CFG, }; -#define MGMT_MSG_CMD_OP_SET 1 -#define MGMT_MSG_CMD_OP_GET 0 +#define MGMT_MSG_CMD_OP_SET 1 +#define MGMT_MSG_CMD_OP_GET 0 -#define COMM_MAX_FEATURE_QWORD 4 struct comm_cmd_feature_nego { struct mgmt_msg_head head; uint16_t func_id; uint8_t opcode; /**< 1: set, 0: get. 
*/ uint8_t rsvd; - uint64_t s_feature[COMM_MAX_FEATURE_QWORD]; + uint64_t s_feature[HINIC3_MAX_FEATURE_QWORD]; }; #define HINIC3_FW_VERSION_LEN 16 diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c index 668bbf4a0e..6e1b1372a5 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.c +++ b/drivers/net/hinic3/base/hinic3_hwdev.c @@ -261,7 +261,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) { int err; - err = hinic3_init_cmdqs(hwdev); + err = hinic3_cmdq_init(hwdev); if (err) { PMD_DRV_LOG(ERR, "Init cmd queues failed"); return err; @@ -276,7 +276,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) return 0; set_cmdq_depth_err: - hinic3_free_cmdqs(hwdev); + hinic3_cmdq_free(hwdev); return err; } @@ -284,7 +284,7 @@ hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev) static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev) { - hinic3_free_cmdqs(hwdev); + hinic3_cmdq_free(hwdev); } static void @@ -426,6 +426,12 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) goto func_reset_err; } + err = hinic3_get_comm_features(hwdev, hwdev->features, HINIC3_MAX_FEATURE_QWORD); + if (err) { + PMD_DRV_LOG(ERR, "Get comm features failed"); + goto get_common_features_err; + } + err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1); if (err) goto set_used_state_err; @@ -444,6 +450,7 @@ hinic3_init_comm_ch(struct hinic3_hwdev *hwdev) hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0); set_used_state_err: func_reset_err: +get_common_features_err: get_func_info_err: free_mgmt_channel(hwdev); diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h index 161f1e2de5..c6661aa1a6 100644 --- a/drivers/net/hinic3/base/hinic3_hwdev.h +++ b/drivers/net/hinic3/base/hinic3_hwdev.h @@ -23,6 +23,18 @@ enum hinic3_set_arm_type { HINIC3_SET_ARM_TYPE_NUM }; +enum { + HINIC3_F_API_CHAIN = 1U << 0, + HINIC3_F_CLP = 1U << 1, + HINIC3_F_CHANNEL_DETECT = 1U << 2, + HINIC3_F_MBOX_SEGMENT = 1U << 3, + HINIC3_F_CMDQ_NUM = 1U << 4, + HINIC3_F_VIRTIO_VQ_SIZE = 1U << 5, + HINIC3_F_EXTEND_CAP = 1U << 6, + HINIC3_F_SMF_CACHE_INVALID = 1U << 7, + HINIC3_F_ONLY_ENHANCE_CMDQ = 1U << 8, + HINIC3_F_USE_REAL_RX_BUF_SIZE = 1U << 9, +}; struct hinic3_page_addr { void *virt_addr; uint64_t phys_addr; @@ -78,6 +90,11 @@ struct hinic3_hw_stats { #define HINIC3_CHIP_FAULT_SIZE (110 * 1024) #define MAX_DRV_BUF_SIZE 4096 +#define HINIC3_SUPPORT_ONLY_ENHANCE_CMDQ(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_ONLY_ENHANCE_CMDQ) +#define HINIC3_IS_USE_REAL_RX_BUF_SIZE(hwdev) \ + (((struct hinic3_hwdev *)hwdev)->features[0] & HINIC3_F_USE_REAL_RX_BUF_SIZE) + struct nic_cmd_chip_fault_stats { uint32_t offset; uint8_t chip_fault_stats[MAX_DRV_BUF_SIZE]; @@ -141,6 +158,7 @@ struct hinic3_hwdev { uint16_t max_vfs; uint16_t link_status; + uint64_t features[HINIC3_MAX_FEATURE_QWORD]; }; bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev); diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c index 5db6d49922..b1f850dfff 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.c +++ b/drivers/net/hinic3/base/hinic3_mgmt.c @@ -13,6 +13,8 @@ #define SEGMENT_LEN 48 #define MGMT_MSG_MAX_SEQ_ID \ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN) +#define MGMT_MSG_LAST_SEG_MAX_LEN \ + (MAX_PF_MGMT_BUF_SIZE - SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID) #define BUF_OUT_DEFAULT_SIZE 1 @@ -34,7 +36,8 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, uint8_t seq_id, 
uint8_t seg_len, uint16_t msg_id) { - if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN) + if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN || + (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)) return false; if (seq_id == 0) { diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h index f8148406d3..4e77b9bec4 100644 --- a/drivers/net/hinic3/base/hinic3_mgmt.h +++ b/drivers/net/hinic3/base/hinic3_mgmt.h @@ -70,6 +70,8 @@ typedef enum { #define HINIC3_TOE_RES (1 << RES_TYPE_TOE) #define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC) +#define HINIC3_MAX_FEATURE_QWORD 4 + struct hinic3_recv_msg { void *msg; diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index c35fefdeac..0bee1ae3fc 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -48,6 +48,43 @@ static const struct vf_msg_handler vf_mag_cmd_handler[] = { }, }; +int +hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, + uint16_t cmd, void *buf_in, uint16_t in_size, + void *buf_out, uint16_t *out_size) +{ + uint32_t i; + bool cmd_to_pf = false; + struct hinic3_handler_info handler_info = { + .cmd = cmd, + .buf_in = buf_in, + .in_size = in_size, + .buf_out = buf_out, + .out_size = out_size, + .dst_func = HINIC3_MGMT_SRC_ID, + .direction = HINIC3_MSG_DIRECT_SEND, + .ack_type = HINIC3_MSG_ACK, + }; + + if (hinic3_func_type(hwdev) == TYPE_VF) { + if (mod == HINIC3_MOD_HILINK) { + for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { + if (cmd == vf_mag_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } else if (mod == HINIC3_MOD_L2NIC) { + for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { + if (cmd == vf_cmd_handler[i].cmd) + cmd_to_pf = true; + } + } + } + if (cmd_to_pf) + handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); + + return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); +} + /** * Set CI table for a SQ. 
* @@ -1712,43 +1749,6 @@ hinic3_set_rq_flush(struct hinic3_hwdev *hwdev, uint16_t q_id) return err; } -int -hinic3_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, - uint16_t cmd, void *buf_in, uint16_t in_size, - void *buf_out, uint16_t *out_size) -{ - uint32_t i; - bool cmd_to_pf = false; - struct hinic3_handler_info handler_info = { - .cmd = cmd, - .buf_in = buf_in, - .in_size = in_size, - .buf_out = buf_out, - .out_size = out_size, - .dst_func = HINIC3_MGMT_SRC_ID, - .direction = HINIC3_MSG_DIRECT_SEND, - .ack_type = HINIC3_MSG_ACK, - }; - - if (hinic3_func_type(hwdev) == TYPE_VF) { - if (mod == HINIC3_MOD_HILINK) { - for (i = 0; i < RTE_DIM(vf_mag_cmd_handler); i++) { - if (cmd == vf_mag_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } else if (mod == HINIC3_MOD_L2NIC) { - for (i = 0; i < RTE_DIM(vf_cmd_handler); i++) { - if (cmd == vf_cmd_handler[i].cmd) - cmd_to_pf = true; - } - } - } - if (cmd_to_pf) - handler_info.dst_func = hinic3_pf_id_of_vf(hwdev); - - return hinic3_send_mbox_to_mgmt(hwdev, mod, &handler_info, 0); -} - int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status) diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build index 48ac7a47f5..30c0a1c9d6 100644 --- a/drivers/net/hinic3/base/meson.build +++ b/drivers/net/hinic3/base/meson.build @@ -2,6 +2,7 @@ # Copyright(c) 2025 Huawei Technologies Co., Ltd base_sources = files( + 'hinic3_cmdq_enhance.c' 'hinic3_cmdq.c', 'hinic3_eqs.c', 'hinic3_hw_cfg.c', -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
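The enhanced cmdq patch above packs the WQE control and completion words with paired SHIFT/MASK macros. The following minimal, standalone sketch is not part of the patch: it only reuses the ENHANCE_CMDQ_WQE_CS_SET/GET pair from hinic3_cmdq_enhance.h to show how a cs_format word is built and how fields such as ERR_CODE and HW_BUSY are read back by the new poll path. The mod/cmd values in main() are made up for illustration.

/*
 * Illustrative sketch only, not driver code: pack and unpack the
 * enhanced-cmdq cs_format word with the SET/GET macro pair from
 * hinic3_cmdq_enhance.h. Field values below are arbitrary examples.
 */
#include <stdint.h>
#include <stdio.h>

#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_SHIFT 0
#define ENHANCE_CMDQ_WQE_CS_CMD_SHIFT      4
#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_SHIFT 12
#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_SHIFT  14
#define ENHANCE_CMDQ_WQE_CS_MOD_SHIFT      16
#define ENHANCE_CMDQ_WQE_CS_CF_SHIFT       31

#define ENHANCE_CMDQ_WQE_CS_ERR_CODE_MASK 0xFU
#define ENHANCE_CMDQ_WQE_CS_CMD_MASK      0xFFU
#define ENHANCE_CMDQ_WQE_CS_ACK_TYPE_MASK 0x3U
#define ENHANCE_CMDQ_WQE_CS_HW_BUSY_MASK  0x1U
#define ENHANCE_CMDQ_WQE_CS_MOD_MASK      0x1FU
#define ENHANCE_CMDQ_WQE_CS_CF_MASK       0x1U

#define ENHANCE_CMDQ_WQE_CS_SET(val, member) \
	((((uint32_t)(val)) & ENHANCE_CMDQ_WQE_CS_##member##_MASK) << \
	 ENHANCE_CMDQ_WQE_CS_##member##_SHIFT)

#define ENHANCE_CMDQ_WQE_CS_GET(val, member) \
	(((val) >> ENHANCE_CMDQ_WQE_CS_##member##_SHIFT) & \
	 ENHANCE_CMDQ_WQE_CS_##member##_MASK)

int main(void)
{
	/* Pack a completion-section word: example cmd 0x20, mod 5, SGE response. */
	uint32_t cs_format = ENHANCE_CMDQ_WQE_CS_SET(0x20, CMD) |
			     ENHANCE_CMDQ_WQE_CS_SET(2, ACK_TYPE) |
			     ENHANCE_CMDQ_WQE_CS_SET(5, MOD) |
			     ENHANCE_CMDQ_WQE_CS_SET(1, CF);

	/* Unpack fields again; the poll path reads ERR_CODE and HW_BUSY this way. */
	printf("cmd=0x%x mod=%u cf=%u err=%u busy=%u\n",
	       (unsigned)ENHANCE_CMDQ_WQE_CS_GET(cs_format, CMD),
	       (unsigned)ENHANCE_CMDQ_WQE_CS_GET(cs_format, MOD),
	       (unsigned)ENHANCE_CMDQ_WQE_CS_GET(cs_format, CF),
	       (unsigned)ENHANCE_CMDQ_WQE_CS_GET(cs_format, ERR_CODE),
	       (unsigned)ENHANCE_CMDQ_WQE_CS_GET(cs_format, HW_BUSY));
	return 0;
}

Hardware consumes these words after hinic3_hw_cpu32()/cpu_to_be conversion; the sketch stays in host byte order to keep the packing logic visible.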
* [PATCH 3/7] net/hinic3: use different callback func to split new/old cmdq operations 2026-01-31 10:05 [PATCH 0/7] hinic3 change for support new SPx NIC Feifei Wang 2026-01-31 10:05 ` [PATCH 1/7] net/hinic3: add support for new SPx series NIC Feifei Wang 2026-01-31 10:05 ` [PATCH 2/7] net/hinic3: add enhance cmdq support for new SPx series NIC Feifei Wang @ 2026-01-31 10:05 ` Feifei Wang 2026-01-31 10:06 ` [PATCH 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang ` (3 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-01-31 10:05 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> The new SPx series NIC with enhanced cmdq sends control messages to the hardware tile in the NIC (htn). This differs from the previous SPx series NIC, which sends control messages to the software tile in the NIC (stn). Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/base/hinic3_nic_cfg.c | 50 ++---- drivers/net/hinic3/base/hinic3_nic_cfg.h | 82 +++++---- drivers/net/hinic3/hinic3_ethdev.c | 16 +- drivers/net/hinic3/hinic3_nic_io.h | 122 +++++++++++++ drivers/net/hinic3/hinic3_rx.c | 3 +- .../net/hinic3/htn_adapt/hinic3_htn_cmdq.c | 163 ++++++++++++++++++ .../net/hinic3/htn_adapt/hinic3_htn_cmdq.h | 55 ++++++ drivers/net/hinic3/htn_adapt/meson.build | 7 + .../net/hinic3/stn_adapt/hinic3_stn_cmdq.c | 147 ++++++++++++++++ .../net/hinic3/stn_adapt/hinic3_stn_cmdq.h | 38 ++++ drivers/net/hinic3/stn_adapt/meson.build | 7 + 11 files changed, 616 insertions(+), 74 deletions(-) create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c create mode 100644 drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h create mode 100644 drivers/net/hinic3/htn_adapt/meson.build create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c create mode 100644 drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h create mode 100644 drivers/net/hinic3/stn_adapt/meson.build diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c index 0bee1ae3fc..f12a2aedee 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.c +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c @@ -11,6 +11,7 @@ #include "hinic3_mbox.h" #include "hinic3_nic_cfg.h" #include "hinic3_wq.h" +#include "hinic3_nic_io.h" struct vf_msg_handler { uint16_t cmd; @@ -439,6 +440,7 @@ int hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) { struct hinic3_vport_state en_state; + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev*)hwdev->dev_handle; uint16_t out_size = sizeof(en_state); int err; @@ -448,6 +450,7 @@ hinic3_set_vport_enable(struct hinic3_hwdev *hwdev, bool enable) memset(&en_state, 0, sizeof(en_state)); en_state.func_id = hinic3_global_func_id(hwdev); en_state.state = enable ?
1 : 0; + en_state.num_qps = nic_dev->num_rqs; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_VPORT_ENABLE, @@ -1156,13 +1159,12 @@ hinic3_rss_set_hash_key(struct hinic3_hwdev *hwdev, uint8_t *key, uint16_t key_s } int -hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, - uint32_t *indir_table, uint32_t indir_table_size) +hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table) { struct hinic3_cmd_buf *cmd_buf = NULL; - uint16_t *indir_tbl = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; int err; - uint32_t i; if (!hwdev || !indir_table) return -EINVAL; @@ -1174,31 +1176,28 @@ hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, } cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE, - cmd_buf, cmd_buf, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_get_rss_indir_table(nic_dev, cmd_buf); + err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, cmd_buf, 0); if (err) { PMD_DRV_LOG(ERR, "Get rss indir table failed"); hinic3_free_cmd_buf(cmd_buf); return err; } - indir_tbl = (uint16_t *)cmd_buf->buf; - for (i = 0; i < indir_table_size; i++) - indir_table[i] = *(indir_tbl + i); + nic_dev->cmdq_ops->cmd_buf_to_rss_indir_table(cmd_buf,indir_table); hinic3_free_cmd_buf(cmd_buf); return 0; } int -hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size) +hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table) { - struct nic_rss_indirect_tbl *indir_tbl = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - uint32_t i, size; - uint32_t *temp = NULL; + struct hinic3_nic_dev *nic_dev = NULL; + uint8_t cmd; uint64_t out_param = 0; int err; @@ -1211,22 +1210,9 @@ hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table return -ENOMEM; } - cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); - indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; - memset(indir_tbl, 0, sizeof(*indir_tbl)); - - for (i = 0; i < indir_table_size; i++) - indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - size = sizeof(indir_tbl->entry) / sizeof(uint16_t); - temp = (uint32_t *)indir_tbl->entry; - for (i = 0; i < size; i++) - temp[i] = rte_cpu_to_be_32(temp[i]); - - err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE, - cmd_buf, &out_param, 0); + nic_dev = (struct hinic3_nic_dev *)hwdev->dev_handle; + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir_table, cmd_buf); + err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set rss indir table failed"); err = -EFAULT; @@ -1474,7 +1460,7 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EIO; } - *cos_id = vf_dcb.state.default_cos; + *cos_id = vf_dcb.state.default_cos % HINIC3_COS_NUM_MAX_HTN; return 0; } diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index a88d62333d..34372a0678 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -14,16 +14,18 @@ #define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1) #define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1) -#define HINIC3_DCB_UP_MAX 0x8 +#define HINIC3_DCB_UP_MAX 0x8 -#define 
HINIC3_MAX_NUM_RQ 256 +#define HINIC3_MAX_NUM_RQ 256 -#define HINIC3_MAX_MTU_SIZE 9600 -#define HINIC3_MIN_MTU_SIZE 256 +#define HINIC3_MAX_MTU_SIZE 9600 +#define HINIC3_MIN_MTU_SIZE 256 -#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX 8 +#define HINIC3_COS_NUM_MAX_HTN 4 -#define HINIC3_VLAN_TAG_SIZE 4 + +#define HINIC3_VLAN_TAG_SIZE 4 #define HINIC3_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2) @@ -34,28 +36,41 @@ #define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen) -#define HINIC3_PF_SET_VF_ALREADY 0x4 -#define HINIC3_MGMT_STATUS_EXIST 0x6 -#define CHECK_IPSU_15BIT 0x8000 +#define HINIC3_PF_SET_VF_ALREADY 0x4 +#define HINIC3_MGMT_STATUS_EXIST 0x6 +#define CHECK_IPSU_15BIT 0x8000 -#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB -#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC +#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB +#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC -#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF +#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF -#define HINIC3_MAX_UC_MAC_ADDRS 128 -#define HINIC3_MAX_MC_MAC_ADDRS 2048 +#define HINIC3_MAX_UC_MAC_ADDRS 128 +#define HINIC3_MAX_MC_MAC_ADDRS 2048 -#define CAP_INFO_MAX_LEN 512 -#define VENDOR_MAX_LEN 17 +#define CAP_INFO_MAX_LEN 512 +#define VENDOR_MAX_LEN 17 /* Structures for RSS config. */ -#define HINIC3_RSS_INDIR_SIZE 256 -#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 -#define HINIC3_RSS_KEY_SIZE 40 -#define HINIC3_RSS_ENABLE 0x01 -#define HINIC3_RSS_DISABLE 0x00 -#define HINIC3_INVALID_QID_BASE 0xffff +#define HINIC3_RSS_INDIR_SIZE 256 +#define HINIC3_RSS_INDIR_CMDQ_SIZE 128 +#define HINIC3_RSS_KEY_SIZE 40 +#define HINIC3_RSS_ENABLE 0x01 +#define HINIC3_RSS_DISABLE 0x00 +#define HINIC3_INVALID_QID_BASE 0xffff + +#define HINIC3_SUPPORT_FEATURE(dev, feature) \ + ((hinic3_get_driver_feature(dev) & NIC_F_##feature) != 0) +#define HINIC3_SUPPORT_RX_HW_COMPACT_CQE(dev) \ + HINIC3_SUPPORT_FEATURE(dev, RX_HW_COMPACT_CQE) +#define HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(dev) \ + HINIC3_SUPPORT_FEATURE(dev, TX_WQE_COMPACT_TASK) +#define HINIC3_SUPPORT_VXLAN_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, VXLAN_OFFLOAD) +#define HINIC3_SUPPORT_GENEVE_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, GENEVE_OFFLOAD) +#define HINIC3_SUPPORT_IPXIP_OFFLOAD(dev) \ + HINIC3_SUPPORT_FEATURE(dev, IPXIP_OFFLOAD) struct hinic3_rss_type { uint8_t tcp_ipv6_ext; @@ -312,7 +327,9 @@ struct hinic3_vport_state { uint16_t func_id; uint16_t rsvd1; uint8_t state; /**< 0:disable, 1:enable. */ - uint8_t rsvd2[3]; + uint8_t num_qps; + uint8_t rx_compact_wqe_en; + uint8_t rsvd2; }; #define MAG_CMD_PORT_DISABLE 0x0 @@ -382,7 +399,7 @@ struct hinic3_cmd_vport_stats { uint32_t stats_size; uint32_t rsvd1; struct hinic3_vport_stats stats; - uint64_t rsvd2[6]; + uint64_t rsvd2[5]; }; struct hinic3_phy_port_stats { @@ -670,12 +687,15 @@ enum hinic3_func_tbl_cfg_bitmap { FUNC_CFG_INIT, FUNC_CFG_RX_BUF_SIZE, FUNC_CFG_MTU, + FUNC_CFG_RX_COMPACT_WQE_EN, /**< Enable 8Byte WQE. */ }; struct hinic3_func_tbl_cfg { uint16_t rx_wqe_buf_size; uint16_t mtu; - uint32_t rsvd[9]; + uint8_t rx_compact_wqe_en; /**< Enable Rx 8Byte compact WQE. */ + uint8_t rsvd0[3]; + uint32_t rsvd[8]; }; struct hinic3_cmd_set_func_tbl { @@ -895,7 +915,7 @@ struct hinic3_set_fdir_ethertype_rule { struct mgmt_msg_head head; uint16_t func_id; - uint16_t rsvd1; + uint16_t index; uint8_t pkt_type_en; uint8_t pkt_type; uint8_t qid; @@ -1231,14 +1251,11 @@ int hinic3_rss_template_free(struct hinic3_hwdev *hwdev); * Device pointer to hwdev. * @param[in] indir_table * RSS indirect table. 
- * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_table); /** * Get RSS indirect table. @@ -1247,14 +1264,11 @@ int hinic3_rss_set_indir_tbl(struct hinic3_hwdev *hwdev, const uint32_t *indir_t * Device pointer to hwdev. * @param[out] indir_table * RSS indirect table. - * @param[in] indir_table_size - * RSS indirect table size. * * @return * 0 on success, non-zero on failure. */ -int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table, - uint32_t indir_table_size); +int hinic3_rss_get_indir_tbl(struct hinic3_hwdev *hwdev, uint32_t *indir_table); /** * Set RSS type. diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index a5116264b0..ecdfcd5654 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -15,6 +15,8 @@ #include "base/hinic3_hw_comm.h" #include "base/hinic3_nic_cfg.h" #include "base/hinic3_nic_event.h" +#include "htn_adapt/hinic3_htn_cmdq.h" +#include "stn_adapt/hinic3_stn_cmdq.h" #include "hinic3_nic_io.h" #include "hinic3_tx.h" #include "hinic3_rx.h" @@ -2581,8 +2583,7 @@ hinic3_rss_reta_query(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) { PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err); return err; @@ -2630,8 +2631,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, return -EINVAL; } - err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl); if (err) return err; @@ -2652,8 +2652,7 @@ hinic3_rss_reta_update(struct rte_eth_dev *dev, } } - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl); if (err) PMD_DRV_LOG(ERR, "Set RSS reta table failed"); @@ -3391,6 +3390,11 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) goto get_cap_fail; } + if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) + nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + else + nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s", diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index db5802e4b7..697e781bd0 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -6,6 +6,7 @@ #define _HINIC3_NIC_IO_H_ #include "hinic3_ethdev.h" +#include "base/hinic3_cmdq.h" #define HINIC3_SQ_WQEBB_SHIFT 4 #define HINIC3_RQ_WQEBB_SHIFT 3 @@ -25,6 +26,13 @@ #define HINIC3_CI_PADDR(base_paddr, q_id) \ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE) +#define HINIC3_Q_CTXT_MAX ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) + +#define SQ_CTXT_SIZE(num_sqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_sqs) * sizeof(struct hinic3_sq_ctxt))) +#define RQ_CTXT_SIZE(num_rqs) ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) \ + + (num_rqs) * sizeof(struct hinic3_rq_ctxt))) + enum hinic3_rq_wqe_type { HINIC3_COMPACT_RQ_WQE, HINIC3_NORMAL_RQ_WQE, @@ -37,12 +45,111 @@ enum hinic3_queue_type { HINIC3_MAX_QUEUE_TYPE, }; +/* Prepare cmd to clean tso/lro space 
*/ +typedef uint8_t (*prepare_cmd_buf_clean_tso_lro_space_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type); +/* Prepare cmd to store RQ and TQ ctxt */ +typedef uint8_t (*prepare_cmd_buf_qp_context_multi_store_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts); +/* Prepare cmd to modify vlan tag */ +typedef uint8_t (*prepare_cmd_buf_modify_svlan_t)(struct hinic3_cmd_buf *cmd_buf, uint16_t func_id, + uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode); +/* Prepare cmd to set RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_set_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf); +/* Prepare cmd to get RSS indir table */ +typedef uint8_t (*prepare_cmd_buf_get_rss_indir_table_t)(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf); +/* Configure RSS indir table */ +typedef void (*cmd_buf_to_rss_indir_table_t)(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table); + +struct hinic3_nic_cmdq_ops { + prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space + prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store + prepare_cmd_buf_modify_svlan_t prepare_cmd_buf_modify_svlan + prepare_cmd_buf_set_rss_indir_table_t prepare_cmd_buf_set_rss_indir_table + prepare_cmd_buf_get_rss_indir_table_t prepare_cmd_buf_get_rss_indir_table + cmd_buf_to_rss_indir_table_t cmd_buf_to_rss_indir_table +}; + /* Doorbell info. */ struct hinic3_db { uint32_t db_info; uint32_t pi_hi; }; +struct hinic3_sq_ctxt { + uint32_t ci_pi; + uint32_t drop_mode_sp; + uint32_t wq_pfn_hi_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd0; + uint32_t pkt_drop_thd; + uint32_t global_sq_id; + uint32_t vlan_ceq_attr; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t rsvd8; + uint32_t rsvd9; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_ctxt { + uint32_t ci_pi; + uint32_t ceq_attr; + uint32_t wq_pfn_hi_type_owner; + uint32_t wq_pfn_lo; + + uint32_t rsvd[3]; + uint32_t cqe_sge_len; + + uint32_t pref_cache; + uint32_t pref_ci_owner; + uint32_t pref_wq_pfn_hi_ci; + uint32_t pref_wq_pfn_lo; + + uint32_t pi_paddr_hi; + uint32_t pi_paddr_lo; + uint32_t wq_block_pfn_hi; + uint32_t wq_block_pfn_lo; +}; + +struct hinic3_rq_cqe_ctx { + struct mgmt_msg_head msg_head; + + uint8_t cqe_type; + uint8_t rq_id; + uint8_t threshold_cqe_num; + uint8_t rsvd1; + + uint16_t msix_entry_idx; + uint16_t rsvd2; + + uint32_t ci_addr_hi; + uint32_t ci_addr_lo; + + uint16_t timer_loop; + uint16_t rsvd3; +}; + +struct hinic3_rq_enable { + struct mgmt_msg_head msg_head; + + uint32_t rq_id; + uint8_t rq_enable; + uint8_t rsvd[3]; +}; + #define DB_INFO_QID_SHIFT 0 #define DB_INFO_NON_FILTER_SHIFT 22 #define DB_INFO_CFLAG_SHIFT 23 @@ -142,6 +249,21 @@ int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev); */ void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev); +/** + * Get cmdq ops software tile NIC(stn) supported. + * + * @return + * Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void); + +/** + * Get cmdq ops hardware tile NIC(htn) supported. + * + * @retval Pointer to ops. + */ +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void); + /** * Update driver feature capabilities. 
* diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index e8e417b474..3d5f4e4524 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -407,8 +407,7 @@ hinic3_refill_indir_rqid(struct hinic3_rxq *rxq) /* Build indir tbl according to the number of rss queue. */ hinic3_fill_indir_tbl(nic_dev, indir_tbl); - err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl, - HINIC3_RSS_INDIR_SIZE); + err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl); if (err) { PMD_DRV_LOG(ERR, "Set indirect table failed, eth_dev:%s, queue_idx:%d", diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c new file mode 100644 index 0000000000..5c94a8b683 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.c @@ -0,0 +1,163 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_htn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + ctxt_block->cmdq_hdr.dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return (uint8_t)HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id, uint16_t func_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->dest_func_id = func_id; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t func_id; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + func_id = hinic3_global_func_id(nic_dev->hwdev); + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid, func_id); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], + start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return (uint8_t)HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->dest_func_id = func_id; + vlan_ctx->start_qid = q_id; + vlan_ctx->vlan_tag = vlan_tag; + 
vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return (uint8_t)HINIC3_HTN_CMD_SVLAN_MODIFY; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf + sizeof(struct hinic3_rss_cmd_header); + cmd_buf->size = sizeof(struct hinic3_rss_cmd_header) + HINIC3_RSS_INDIR_SIZE; + memset(indir_tbl, 0, HINIC3_RSS_INDIR_SIZE); + + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) { + indir_tbl[i] = (uint8_t)(*(indir_table + i)); + } + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(indir_tbl, HINIC3_RSS_INDIR_SIZE); + + return (uint8_t)HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE; +} + +static void prepare_rss_indir_table_cmd_header(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + struct hinic3_rss_cmd_header *header = cmd_buf->buf; + + header->dest_func_id = hinic3_global_func_id(nic_dev->hwdev); + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(header, sizeof(*header)); +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + memset(cmd_buf->buf, 0, cmd_buf->size); + prepare_rss_indir_table_cmd_header(nic_dev, cmd_buf); + + return (uint8_t)HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint8_t *indir_tbl = NULL; + + indir_tbl = (uint8_t *)cmd_buf->buf; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_be32_to_cpu(cmd_buf->buf, HINIC3_RSS_INDIR_SIZE); + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) { + indir_table[i] = *(indir_tbl + i); + } +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_htn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h new file mode 100644 index 0000000000..d4d5a733df --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/hinic3_htn_cmdq.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_HTN_CMDQ_H_ +#define _HINIC3_HTN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + u32 rsvd[2]; + u16 num_queues; + u16 queue_type; + u16 start_qid; + u16 dest_func_id; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_rss_cmd_header { + u32 rsv[3]; + u16 rsv1; + u16 dest_func_id; +}; + +/* NIC HTN CMD */ +enum hinic3_htn_cmd { 
+ HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_ST = 0x20, + HINIC3_HTN_CMD_SQ_RQ_CONTEXT_MULTI_LD, + HINIC3_HTN_CMD_TSO_LRO_SPACE_CLEAN, + HINIC3_HTN_CMD_SVLAN_MODIFY, + HINIC3_HTN_CMD_SET_RSS_INDIR_TABLE, + HINIC3_HTN_CMD_GET_RSS_INDIR_TABLE +}; + +struct hinic3_vlan_ctx { + u32 rsv[2]; + u16 vlan_tag; + u8 vlan_sel; + u8 vlan_mode; + u16 start_qid; + u16 dest_func_id; +}; + +#endif /* _HINIC3_HTN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/htn_adapt/meson.build b/drivers/net/hinic3/htn_adapt/meson.build new file mode 100644 index 0000000000..17f7ad09e3 --- /dev/null +++ b/drivers/net/hinic3/htn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += include_directories('.') +sources += files( + 'hinic3_htn_cmdq.c', +) diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c new file mode 100644 index 0000000000..cf50b06beb --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.c @@ -0,0 +1,147 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#include "hinic3_compat.h" +#include "hinic3_nic_cfg.h" +#include "hinic3_cmd.h" +#include "hinic3_hwif.h" +#include "hinic3_stn_cmdq.h" + +static uint8_t prepare_cmd_buf_clean_tso_lro_space(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type) +{ + struct hinic3_clean_queue_ctxt *ctxt_block = NULL; + + ctxt_block = cmd_buf->buf; + ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; + ctxt_block->cmdq_hdr.queue_type = ctxt_type; + ctxt_block->cmdq_hdr.start_qid = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); + + cmd_buf->size = sizeof(*ctxt_block); + return (uint8_t)HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT; +} + +static void qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, + enum hinic3_qp_ctxt_type ctxt_type, uint16_t num_queues, + uint16_t q_id) +{ + qp_ctxt_hdr->queue_type = ctxt_type; + qp_ctxt_hdr->num_queues = num_queues; + qp_ctxt_hdr->start_qid = q_id; + qp_ctxt_hdr->rsvd = 0; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); +} + +static uint8_t prepare_cmd_buf_qp_context_multi_store(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf, + enum hinic3_qp_ctxt_type ctxt_type, + uint16_t start_qid, uint16_t max_ctxts) +{ + struct hinic3_qp_ctxt_block *qp_ctxt_block = NULL; + uint16_t i; + + qp_ctxt_block = cmd_buf->buf; + + qp_prepare_cmdq_header(&qp_ctxt_block->cmdq_hdr, ctxt_type, + max_ctxts, start_qid); + + for (i = 0; i < max_ctxts; i++) { + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + hinic3_rq_prepare_ctxt(nic_dev->rxqs[start_qid + i], + &qp_ctxt_block->rq_ctxt[i]); + else + hinic3_sq_prepare_ctxt(nic_dev->txqs[start_qid + i], start_qid + i, + &qp_ctxt_block->sq_ctxt[i]); + } + + if (ctxt_type == HINIC3_QP_CTXT_TYPE_RQ) + cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + else + cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + + return (uint8_t)HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX; +} + +static uint8_t prepare_cmd_buf_modify_svlan(struct hinic3_cmd_buf *cmd_buf, + uint16_t func_id, uint16_t vlan_tag, uint16_t q_id, uint8_t vlan_mode) +{ + struct hinic3_vlan_ctx *vlan_ctx = NULL; + + cmd_buf->size = sizeof(struct hinic3_vlan_ctx); + vlan_ctx = (struct hinic3_vlan_ctx *)cmd_buf->buf; + + vlan_ctx->func_id = func_id; + vlan_ctx->qid = q_id; + vlan_ctx->vlan_id = vlan_tag; 
+ vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */ + vlan_ctx->vlan_mode = vlan_mode; + + rte_atomic_thread_fence(rte_memory_order_seq_cst); + hinic3_cpu_to_be32(vlan_ctx, sizeof(struct hinic3_vlan_ctx)); + return (uint8_t)HINIC3_UCODE_CMD_MODIFY_VLAN_CTX; +} + +static uint8_t prepare_cmd_buf_set_rss_indir_table(struct hinic3_nic_dev *nic_dev, + const uint32_t *indir_table, + struct hinic3_cmd_buf *cmd_buf) +{ + uint32_t i, size; + uint32_t *temp = NULL; + struct nic_rss_indirect_tbl *indir_tbl = NULL; + + indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf; + cmd_buf->size = sizeof(struct nic_rss_indirect_tbl); + memset(indir_tbl, 0, sizeof(*indir_tbl)); + + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) { + indir_tbl->entry[i] = (uint16_t)(*(indir_table + i)); + } + size = sizeof(indir_tbl->entry) / sizeof(uint32_t); + temp = (uint32_t *)indir_tbl->entry; + for (i = 0; i < size; i++) { + rte_atomic_thread_fence(rte_memory_order_seq_cst); + temp[i] = cpu_to_be32(temp[i]); + } + return (uint8_t)HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE; +} + +static uint8_t prepare_cmd_buf_get_rss_indir_table(struct hinic3_nic_dev *nic_dev, + struct hinic3_cmd_buf *cmd_buf) +{ + (void)nic_dev; + memset(cmd_buf->buf, 0, cmd_buf->size); + + return (uint8_t)HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE; +} + +static void cmd_buf_to_rss_indir_table(const struct hinic3_cmd_buf *cmd_buf, uint32_t *indir_table) +{ + uint32_t i; + uint16_t *indir_tbl = NULL; + + indir_tbl = (uint16_t *)cmd_buf->buf; + for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++) { + indir_table[i] = *(indir_tbl + i); + } +} + +struct hinic3_nic_cmdq_ops *hinic3_nic_cmdq_get_stn_ops(void) +{ + static struct hinic3_nic_cmdq_ops cmdq_ops = { + .prepare_cmd_buf_clean_tso_lro_space = prepare_cmd_buf_clean_tso_lro_space, + .prepare_cmd_buf_qp_context_multi_store = prepare_cmd_buf_qp_context_multi_store, + .prepare_cmd_buf_modify_svlan = prepare_cmd_buf_modify_svlan, + .prepare_cmd_buf_set_rss_indir_table = prepare_cmd_buf_set_rss_indir_table, + .prepare_cmd_buf_get_rss_indir_table = prepare_cmd_buf_get_rss_indir_table, + .cmd_buf_to_rss_indir_table = cmd_buf_to_rss_indir_table, + }; + + return &cmdq_ops; +} diff --git a/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h new file mode 100644 index 0000000000..f8d26e9397 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/hinic3_stn_cmdq.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2026 Huawei Technologies Co., Ltd + */ + +#ifndef _HINIC3_STN_CMDQ_H_ +#define _HINIC3_STN_CMDQ_H_ + +#include "hinic3_nic_io.h" + +struct hinic3_qp_ctxt_header { + uint16_t num_queues; + uint16_t queue_type; + uint16_t start_qid; + uint16_t rsvd; +}; + +struct hinic3_clean_queue_ctxt { + struct hinic3_qp_ctxt_header cmdq_hdr; + uint32_t rsvd; +}; + +struct hinic3_qp_ctxt_block { + struct hinic3_qp_ctxt_header cmdq_hdr; + union { + struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; + struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; + }; +}; + +struct hinic3_vlan_ctx { + uint32_t func_id; + uint32_t qid; /* if qid = 0xFFFF, config for all queues */ + uint32_t vlan_id; + uint32_t vlan_mode; + uint32_t vlan_sel; +}; + +#endif /* _HINIC3_STN_CMDQ_H_ */ diff --git a/drivers/net/hinic3/stn_adapt/meson.build b/drivers/net/hinic3/stn_adapt/meson.build new file mode 100644 index 0000000000..99f7f66ab4 --- /dev/null +++ b/drivers/net/hinic3/stn_adapt/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2026 Huawei Technologies Co., Ltd + +includes += 
include_directories('.') +sources += files( + 'hinic3_stn_cmdq.c', +) -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
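The ops table registered by hinic3_nic_cmdq_get_stn_ops() lets the generic NIC code build command buffers without caring about the STN/HTN layout differences. As a rough illustration of how a caller is expected to drive it, the sketch below builds and sends the RSS indirection command through the callback. This is a minimal sketch assuming the driver headers added in this series; the wrapper name example_set_rss_indir() is hypothetical, command-buffer release is omitted, and only hinic3_alloc_cmd_buf(), hinic3_cmdq_direct_resp() and the callback signature are taken from the patches themselves.

static int
example_set_rss_indir(struct hinic3_nic_dev *nic_dev, const uint32_t *indir)
{
	struct hinic3_nic_cmdq_ops *ops = nic_dev->cmdq_ops; /* stn or htn ops */
	struct hinic3_cmd_buf *cmd_buf;
	uint64_t out_param = 0;
	uint8_t cmd;
	int err;

	cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
	if (!cmd_buf)
		return -ENOMEM;

	/* The callback fills cmd_buf (size + big-endian payload) and returns
	 * the ucode command opcode to send with it.
	 */
	cmd = ops->prepare_cmd_buf_set_rss_indir_table(nic_dev, indir, cmd_buf);

	err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
				      cmd, cmd_buf, &out_param, 0);
	/* Command-buffer release omitted for brevity. */
	return (err || out_param != 0) ? -EFAULT : 0;
}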
* [PATCH 4/7] net/hinic3: add fun init ops to support Compact CQE 2026-01-31 10:05 [PATCH 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (2 preceding siblings ...) 2026-01-31 10:05 ` [PATCH 3/7] net/hinic3: use different callback func to split new/old cmdq operations Feifei Wang @ 2026-01-31 10:06 ` Feifei Wang 2026-01-31 10:06 ` [PATCH 5/7] net/hinic3: add rx " Feifei Wang ` (2 subsequent siblings) 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-01-31 10:06 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> For new SPx NIC, use compact CQE to achieve better performance. In this mode, CQE is uploaded together with packet. When doing fun init, replace CQE's dma memory mapping with CI index, hinic3 driver will loop CI to check if packet arrive. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_ethdev.c | 214 +++++++++--- drivers/net/hinic3/hinic3_ethdev.h | 131 +++++--- drivers/net/hinic3/hinic3_nic_io.c | 507 +++++++++++++---------------- drivers/net/hinic3/hinic3_nic_io.h | 25 ++ 4 files changed, 496 insertions(+), 381 deletions(-) diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c index ecdfcd5654..f2dd414ce7 100644 --- a/drivers/net/hinic3/hinic3_ethdev.c +++ b/drivers/net/hinic3/hinic3_ethdev.c @@ -32,7 +32,7 @@ #define HINIC3_DEFAULT_RX_FREE_THRESH 32u #define HINIC3_DEFAULT_TX_FREE_THRESH 32u -#define HINIC3_RX_WAIT_CYCLE_THRESH 500 +#define HINIC3_RX_WAIT_CYCLE_THRESH 150 /** * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID. @@ -431,8 +431,10 @@ hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) { + struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev*)hwdev->dev_handle; uint8_t default_cos = 0; uint8_t valid_cos_bitmap; + uint8_t cos_num_max; uint8_t i; valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap; @@ -441,7 +443,12 @@ hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id) return -EFAULT; } - for (i = 0; i < HINIC3_COS_NUM_MAX; i++) { + if (nic_dev->feature_cap & NIC_F_HTN_CMDQ) + cos_num_max = HINIC3_COS_NUM_MAX; + else + cos_num_max = HINIC3_COS_NUM_MAX_HTN; + + for (i = 0; i < cos_num_max; i++) { if (valid_cos_bitmap & RTE_BIT32(i)) /* Find max cos id as default cos. 
*/ default_cos = i; @@ -632,6 +639,26 @@ hinic3_dev_configure(struct rte_eth_dev *dev) nic_dev->num_sqs = dev->data->nb_tx_queues; nic_dev->num_rqs = dev->data->nb_rx_queues; + + if (nic_dev->num_sqs > nic_dev->max_sqs || + nic_dev->num_rqs > nic_dev->max_rqs) { + PMD_DRV_LOG(ERR, "num_sqs: %d or num_rqs: %d larger than max_sqs: %d or max_rqs: %d", + nic_dev->num_sqs, nic_dev->num_rqs, + nic_dev->max_sqs, nic_dev->max_rqs); + return -EINVAL; + } + + /* The range of mtu is 384~9600 */ + if (HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) < HINIC3_MIN_FRAME_SIZE || + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) > + HINIC3_MAX_JUMBO_FRAME_SIZE) { + PMD_DRV_LOG(ERR, "Max rx pkt len out of range, max_rx_pkt_len: %d, " + "expect between %d and %d", + HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode), + HINIC3_MIN_FRAME_SIZE, HINIC3_MAX_JUMBO_FRAME_SIZE); + return -EINVAL; + } + nic_dev->mtu_size = (uint16_t)HINIC3_PKTLEN_TO_MTU(HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode)); if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) @@ -644,6 +671,16 @@ hinic3_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +hinic3_dev_tnl_tso_support(struct rte_eth_dev_info *info, struct hinic3_nic_dev *nic_dev) { + if (HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) { + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO; + } + if (HINIC3_SUPPORT_IPXIP_OFFLOAD(nic_dev)) { + info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO; + } +} + /** * Get information about the device. * @@ -685,6 +722,8 @@ hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + hinic3_dev_tnl_tso_support(info, nic_dev); + info->hash_key_size = HINIC3_RSS_KEY_SIZE; info->reta_size = HINIC3_RSS_INDIR_SIZE; info->flow_type_rss_offloads = HINIC3_RSS_OFFLOAD_ALL; @@ -926,16 +965,25 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, struct hinic3_rxq *rxq = NULL; const struct rte_memzone *rq_mz = NULL; const struct rte_memzone *cqe_mz = NULL; + const struct rte_memzone *ci_mz = NULL; const struct rte_memzone *pi_mz = NULL; uint16_t rq_depth, rx_free_thresh; uint32_t queue_buf_size; void *db_addr = NULL; int wqe_count; uint32_t buf_size; + uint32_t rx_buf_size; int err; nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + /* Queue depth must be equal to queue 0 */ + if (qid != 0 && (nb_desc != nic_dev->rxqs[0]->q_depth)) { + PMD_DRV_LOG(WARNING, "rxq%u depth:%u is not equal to queue0 depth:%u.\n", + qid, nb_desc, nic_dev->rxqs[0]->q_depth); + nb_desc = nic_dev->rxqs[0]->q_depth; + } + /* Queue depth must be power of 2, otherwise will be aligned up. */ rq_depth = (nb_desc & (nb_desc - 1)) ? ((uint16_t)(1U << (rte_log2_u32(nb_desc) + 1))) : nb_desc; @@ -988,17 +1036,19 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, nic_dev->rxqs[qid] = rxq; rxq->mb_pool = mp; rxq->q_id = qid; + rxq->next_to_update = 0; rxq->q_depth = rq_depth; rxq->q_mask = rq_depth - 1; rxq->delta = rq_depth; + rxq->cons_idx = 0; + rxq->prod_idx = 0; rxq->rx_free_thresh = rx_free_thresh; rxq->rxinfo_align_end = rxq->q_depth - rxq->rx_free_thresh; rxq->port_id = dev->data->port_id; rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH; rxq->rx_deferred_start = rx_conf->rx_deferred_start; /* If buf_len used for function table, need to translated. 
*/ - uint16_t rx_buf_size = - rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; + rx_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size); if (err) { PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s", @@ -1006,11 +1056,17 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto adjust_bufsize_fail; } - if (buf_size >= HINIC3_RX_BUF_SIZE_4K && - buf_size < HINIC3_RX_BUF_SIZE_16K) - rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; - else - rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + /* If NIC support compact CQE, use compact wqe as default. */ + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + + rxq->wqe_type = HINIC3_COMPACT_RQ_WQE; + } else { + if (buf_size >= HINIC3_RX_BUF_SIZE_4K && + buf_size < HINIC3_RX_BUF_SIZE_16K) + rxq->wqe_type = HINIC3_EXTEND_RQ_WQE; + else + rxq->wqe_type = HINIC3_NORMAL_RQ_WQE; + } rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type; rxq->wqebb_size = (uint16_t)RTE_BIT32(rxq->wqebb_shift); @@ -1062,36 +1118,52 @@ hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_rx_info_fail; } - cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, - rq_depth * sizeof(*rxq->rx_cqe), - RTE_CACHE_LINE_SIZE, socket_id); - if (!cqe_mz) { - PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", - dev->data->name); - err = -ENOMEM; - goto alloc_cqe_mz_fail; - } - memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); - rxq->cqe_mz = cqe_mz; - rxq->cqe_start_paddr = cqe_mz->iova; - rxq->cqe_start_vaddr = cqe_mz->addr; - rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; - - wqe_count = hinic3_rx_fill_wqe(rxq); - if (wqe_count != rq_depth) { - PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", - wqe_count, dev->data->name); - err = -ENOMEM; - goto fill_rx_wqe_fail; + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_ci_mz", qid, + sizeof(*rxq->rq_ci), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (!ci_mz) { + PMD_DRV_LOG(ERR, "Allocate ci mem zone failed, dev_name: %s", dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(ci_mz); + goto alloc_cqe_ci_mz_fail; + } + + memset(ci_mz->addr, 0, sizeof(*rxq->rq_ci)); + rxq->ci_mz = ci_mz; + rxq->rq_ci = (struct hinic3_rq_ci_wb *)ci_mz->addr; + rxq->rq_ci_paddr = ci_mz->iova; + } else { + cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid, + rq_depth * sizeof(*rxq->rx_cqe), + RTE_CACHE_LINE_SIZE, socket_id); + if (!cqe_mz) { + PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s", + dev->data->name); + err = -ENOMEM; + goto alloc_cqe_ci_mz_fail; + } + memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe)); + rxq->cqe_mz = cqe_mz; + rxq->cqe_start_paddr = cqe_mz->iova; + rxq->cqe_start_vaddr = cqe_mz->addr; + rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr; + + wqe_count = hinic3_rx_fill_wqe(rxq); + if (wqe_count != rq_depth) { + PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s", + wqe_count, dev->data->name); + err = -ENOMEM; + hinic3_memzone_free(cqe_mz); + goto alloc_cqe_ci_mz_fail; + } } - /* Record rxq pointer in rte_eth rx_queues. 
*/ dev->data->rx_queues[qid] = rxq; return 0; -fill_rx_wqe_fail: - hinic3_memzone_free(rxq->cqe_mz); -alloc_cqe_mz_fail: +alloc_cqe_ci_mz_fail: rte_free(rxq->rx_info); alloc_rx_info_fail: @@ -1193,12 +1265,15 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, txq->q_id = qid; txq->q_depth = sq_depth; txq->q_mask = sq_depth - 1; + txq->cons_idx = 0; + txq->prod_idx = 0; txq->wqebb_shift = HINIC3_SQ_WQEBB_SHIFT; txq->wqebb_size = (uint16_t)RTE_BIT32(txq->wqebb_shift); txq->tx_free_thresh = tx_free_thresh; txq->owner = 1; txq->cos = nic_dev->default_cos; txq->tx_deferred_start = tx_conf->tx_deferred_start; + txq->tx_wqe_compact_task = HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev); ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid, HINIC3_CI_Q_ADDR_SIZE, @@ -1246,7 +1321,6 @@ hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc, goto alloc_tx_info_fail; } - /* Record txq pointer in rte_eth tx_queues. */ dev->data->tx_queues[qid] = txq; return 0; @@ -1274,7 +1348,10 @@ hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) hinic3_free_rxq_mbufs(rxq); - hinic3_memzone_free(rxq->cqe_mz); + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) + hinic3_memzone_free(rxq->ci_mz); + else + hinic3_memzone_free(rxq->cqe_mz); rte_free(rxq->rx_info); rxq->rx_info = NULL; @@ -1323,24 +1400,31 @@ hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id) static int hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; + rxq = dev->data->rx_queues[rq_id]; + rc = hinic3_start_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, - "Start rx queue failed, eth_dev:%s, queue_idx:%d", - dev->data->name, rq_id); + "Start rx queue failed, eth_dev:%s, queue_idx:%d", + dev->data->name, rq_id); return rc; } - dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); - if (rc) { - PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", - rq_id); - return rc; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, true); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.", + rq_id); + return rc; + } } + + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1358,21 +1442,24 @@ hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id) static int hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_rxq *rxq = dev->data->rx_queues[rq_id]; int rc; - rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + rc = hinic3_stop_rq(dev, rxq); if (rc) { PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); return rc; } - rc = hinic3_stop_rq(dev, rxq); - if (rc) { - PMD_DRV_LOG(ERR, - "Stop rx queue failed, eth_dev:%s, queue_idx:%d", - dev->data->name, rq_id); - return rc; + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) { + rc = hinic3_enable_rxq_fdir_filter(dev, rq_id, false); + if (rc) { + PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.", rq_id); + return rc; + } } + dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; @@ -1388,6 +1475,7 @@ hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id) HINIC3_SET_TXQ_STARTED(txq); dev->data->tx_queue_state[sq_id] = 
RTE_ETH_QUEUE_STATE_STARTED; + return 0; } @@ -1404,6 +1492,7 @@ hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id) dev->data->name, sq_id); return rc; } + HINIC3_SET_TXQ_STOPPED(txq); dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -3290,6 +3379,24 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { .flow_ops_get = hinic3_dev_filter_ctrl, }; +static void hinic3_nic_tx_rx_ops_init(struct hinic3_nic_dev *nic_dev) +{ + if (HINIC3_SUPPORT_TX_WQE_COMPACT_TASK(nic_dev)) + nic_dev->tx_rx_ops.nic_tx_set_wqe_offload = hinic3_tx_set_compact_task_offload; + else + nic_dev->tx_rx_ops.nic_tx_set_wqe_offload = hinic3_tx_set_normal_task_offload; + + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + nic_dev->tx_rx_ops.nic_rx_get_cqe_info = hinic3_rx_get_compact_cqe_info; + nic_dev->tx_rx_ops.nic_rx_cqe_done = rx_integrated_cqe_done; + nic_dev->tx_rx_ops.nic_rx_poll_rq_empty = hinic3_poll_integrated_cqe_rq_empty; + } else { + nic_dev->tx_rx_ops.nic_rx_get_cqe_info = hinic3_rx_get_cqe_info; + nic_dev->tx_rx_ops.nic_rx_cqe_done = rx_separate_cqe_done; + nic_dev->tx_rx_ops.nic_rx_poll_rq_empty = hinic3_poll_rq_empty; + } +} + /** * Initialize the network function, including hardware configuration, memory * allocation for data structures, MAC address setup, and interrupt enabling. @@ -3303,6 +3410,7 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = { * 0 on success, non-zero on failure. */ static int + hinic3_func_init(struct rte_eth_dev *eth_dev) { struct hinic3_tcam_info *tcam_info = NULL; @@ -3391,9 +3499,9 @@ hinic3_func_init(struct rte_eth_dev *eth_dev) } if (!(nic_dev->feature_cap & NIC_F_HTN_CMDQ)) - nic_dev->cmdq_ops = hinic3_cmdq_get_stn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_stn_ops(); else - nic_dev->cmdq_ops = hinic3_cmdq_get_htn_ops(); + nic_dev->cmdq_ops = hinic3_nic_cmdq_get_htn_ops(); err = hinic3_init_sw_rxtxqs(nic_dev); if (err) { diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h index 4a5dbb0844..896f015341 100644 --- a/drivers/net/hinic3/hinic3_ethdev.h +++ b/drivers/net/hinic3/hinic3_ethdev.h @@ -10,48 +10,46 @@ #include "hinic3_fdir.h" -#define HINIC3_PMD_DRV_VERSION "B106" - #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle) -#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD -#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN -#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD -#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD -#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG -#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM -#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM -#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM -#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN -#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK -#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM -#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 -#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 -#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN -#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED -#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH -#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK -#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN -#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM -#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 -#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO -#define 
HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM +#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD +#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN +#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD +#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD +#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG +#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM +#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM +#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM +#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN +#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK +#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM +#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6 +#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4 +#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN +#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED +#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH +#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK +#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN +#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM +#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6 +#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO +#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM #define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool" /* Mbuf pool for copy invalid mbuf segs. */ -#define HINIC3_COPY_MEMPOOL_DEPTH 1024 -#define HINIC3_COPY_MEMPOOL_CACHE 128 -#define HINIC3_COPY_MBUF_SIZE 4096 +#define HINIC3_COPY_MEMPOOL_DEPTH 1024 +#define HINIC3_COPY_MEMPOOL_CACHE 128 +#define HINIC3_COPY_MBUF_SIZE 4096 -#define HINIC3_DEV_NAME_LEN 32 -#define DEV_STOP_DELAY_MS 100 -#define DEV_START_DELAY_MS 100 -#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 +#define HINIC3_DEV_NAME_LEN 32 +#define DEV_STOP_DELAY_MS 100 +#define DEV_START_DELAY_MS 100 +#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000 -#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) -#define HINIC3_MAX_QUEUE_NUM 64 +#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE) +#define HINIC3_MAX_QUEUE_NUM 256 #define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \ ((struct hinic3_nic_dev *)(dev)->data->dev_private) @@ -68,27 +66,58 @@ enum hinic3_tx_cvlan_type { }; enum nic_feature_cap { - NIC_F_CSUM = RTE_BIT32(0), - NIC_F_SCTP_CRC = RTE_BIT32(1), - NIC_F_TSO = RTE_BIT32(2), - NIC_F_LRO = RTE_BIT32(3), - NIC_F_UFO = RTE_BIT32(4), - NIC_F_RSS = RTE_BIT32(5), - NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), - NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), - NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), - NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), - NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), - NIC_F_FDIR = RTE_BIT32(11), - NIC_F_PROMISC = RTE_BIT32(12), - NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_CSUM = RTE_BIT32(0), + NIC_F_SCTP_CRC = RTE_BIT32(1), + NIC_F_TSO = RTE_BIT32(2), + NIC_F_LRO = RTE_BIT32(3), + NIC_F_UFO = RTE_BIT32(4), + NIC_F_RSS = RTE_BIT32(5), + NIC_F_RX_VLAN_FILTER = RTE_BIT32(6), + NIC_F_RX_VLAN_STRIP = RTE_BIT32(7), + NIC_F_TX_VLAN_INSERT = RTE_BIT32(8), + NIC_F_VXLAN_OFFLOAD = RTE_BIT32(9), + NIC_F_IPSEC_OFFLOAD = RTE_BIT32(10), + NIC_F_FDIR = RTE_BIT32(11), + NIC_F_PROMISC = RTE_BIT32(12), + NIC_F_ALLMULTI = RTE_BIT32(13), + NIC_F_PTP_1588_V2 = RTE_BIT32(18), + NIC_F_TX_WQE_COMPACT_TASK = RTE_BIT32(19), + NIC_F_RX_HW_COMPACT_CQE = RTE_BIT32(20), + NIC_F_HTN_CMDQ = RTE_BIT32(21), + 
NIC_F_GENEVE_OFFLOAD = RTE_BIT32(22), + NIC_F_IPXIP_OFFLOAD = RTE_BIT32(23), + NIC_F_TC_FLOWER_OFFLOAD = RTE_BIT32(24), + NIC_F_HTN_FDIR = RTE_BIT32(25), + NIC_F_SQ_RQ_CI_COALESCE = RTE_BIT32(26), + NIC_F_RX_SW_COMPACT_CQE = RTE_BIT32(27), + }; -#define DEFAULT_DRV_FEATURE 0x3FFF +#define DEFAULT_DRV_FEATURE 0x3FC3FFF TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow); TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow); +/* Tx WQE offload set callback function */ +typedef void (*nic_tx_set_wqe_offload_t)(void *wqe_info, void *wqe_combo); + +/* Rx CQE info get callback function */ +typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rx_queue, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/* Rx CQE check status callback funcion */ +typedef bool (*nic_rx_cqe_done_t)(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/* Rx CQE empty poll callback function */ +typedef int (*nic_rx_poll_rq_empty_t)(struct hinic3_rxq *rxq); + +struct hinic3_nic_tx_rx_ops { + nic_tx_set_wqe_offload_t nic_tx_set_wqe_offload; + nic_rx_get_cqe_info_t nic_rx_get_cqe_info; + nic_rx_cqe_done_t nic_rx_cqe_done; + nic_rx_poll_rq_empty_t nic_rx_poll_rq_empty; +}; + struct hinic3_nic_dev { struct hinic3_hwdev *hwdev; /**< Hardware device. */ struct hinic3_txq **txqs; @@ -133,6 +162,8 @@ struct hinic3_nic_dev { struct hinic3_tcam_info tcam; struct hinic3_ethertype_filter_list filter_ethertype_list; struct hinic3_fdir_rule_filter_list filter_fdir_rule_list; + struct hinic3_nic_cmdq_ops *cmdq_ops; + struct hinic3_nic_tx_rx_ops tx_rx_ops; }; extern const struct rte_flow_ops hinic3_flow_ops; diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c index 7f2972f1d1..26e832589b 100644 --- a/drivers/net/hinic3/hinic3_nic_io.c +++ b/drivers/net/hinic3/hinic3_nic_io.c @@ -11,297 +11,192 @@ #include "hinic3_rx.h" #include "hinic3_tx.h" -#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 -#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 -#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF -#define HINIC3_DEAULT_DROP_THD_OFF 0 - -#define WQ_PREFETCH_MAX 6 -#define WQ_PREFETCH_MIN 1 -#define WQ_PREFETCH_THRESHOLD 256 - -#define HINIC3_Q_CTXT_MAX \ - ((uint16_t)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64)) - -enum hinic3_qp_ctxt_type { - HINIC3_QP_CTXT_TYPE_SQ, - HINIC3_QP_CTXT_TYPE_RQ, -}; - -struct hinic3_qp_ctxt_header { - uint16_t num_queues; - uint16_t queue_type; - uint16_t start_qid; - uint16_t rsvd; -}; - -struct hinic3_sq_ctxt { - uint32_t ci_pi; - uint32_t drop_mode_sp; /**< Packet drop mode and special flags. */ - uint32_t wq_pfn_hi_owner; /**< High PFN and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd0; /**< Reserved field 0. */ - uint32_t pkt_drop_thd; /**< Packet drop threshold. */ - uint32_t global_sq_id; - uint32_t vlan_ceq_attr; /**< VLAN and CEQ attributes. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t rsvd8; /**< Reserved field 8. */ - uint32_t rsvd9; /**< Reserved field 9. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_rq_ctxt { - uint32_t ci_pi; - uint32_t ceq_attr; /**< Completion event queue attributes. 
*/ - uint32_t wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */ - uint32_t wq_pfn_lo; /**< Low bits of work queue PFN. */ - - uint32_t rsvd[3]; /**< Reserved field. */ - uint32_t cqe_sge_len; /**< CQE scatter/gather element length. */ - - uint32_t pref_cache; /**< Cache prefetch settings for the queue. */ - uint32_t pref_ci_owner; /**< Prefetch settings for CI and ownership. */ - uint32_t pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */ - uint32_t pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */ - - uint32_t pi_paddr_hi; /**< High 32-bits of PI DMA address. */ - uint32_t pi_paddr_lo; /**< Low 32-bits of PI DMA address. */ - uint32_t wq_block_pfn_hi; /**< High bits of work queue block PFN. */ - uint32_t wq_block_pfn_lo; /**< Low bits of work queue block PFN. */ -}; - -struct hinic3_sq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_rq_ctxt_block { - struct hinic3_qp_ctxt_header cmdq_hdr; - struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX]; -}; - -struct hinic3_clean_queue_ctxt { - struct hinic3_qp_ctxt_header cmdq_hdr; - uint32_t rsvd; -}; - -#define SQ_CTXT_SIZE(num_sqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_sqs) * sizeof(struct hinic3_sq_ctxt))) - -#define RQ_CTXT_SIZE(num_rqs) \ - ((uint16_t)(sizeof(struct hinic3_qp_ctxt_header) + \ - (num_rqs) * sizeof(struct hinic3_rq_ctxt))) - -#define CI_IDX_HIGH_SHIFH 12 +#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 3 +#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16 +#define HINIC3_DEAULT_DROP_THD_ON 0xFFFF +#define HINIC3_DEAULT_DROP_THD_OFF 0 + +#define WQ_PREFETCH_MAX 6 +#define WQ_PREFETCH_MIN 1 +#define WQ_PREFETCH_THRESHOLD 256 + +#define CI_IDX_HIGH_SHIFH 12 #define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH) -#define SQ_CTXT_PI_IDX_SHIFT 0 -#define SQ_CTXT_CI_IDX_SHIFT 16 +#define SQ_CTXT_PI_IDX_SHIFT 0 +#define SQ_CTXT_CI_IDX_SHIFT 16 -#define SQ_CTXT_PI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_IDX_MASK 0xFFFFU +#define SQ_CTXT_PI_IDX_MASK 0xFFFFU +#define SQ_CTXT_CI_IDX_MASK 0xFFFFU -#define SQ_CTXT_CI_PI_SET(val, member) \ +#define SQ_CTXT_CI_PI_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 -#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 +#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0 +#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1 -#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U -#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U +#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U +#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U -#define SQ_CTXT_MODE_SET(val, member) \ - (((val) & SQ_CTXT_MODE_##member##_MASK) \ +#define SQ_CTXT_MODE_SET(val, member) \ + (((val) & SQ_CTXT_MODE_##member##_MASK) \ << SQ_CTXT_MODE_##member##_SHIFT) -#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 +#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23 -#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define SQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define SQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \ << SQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 -#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 +#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0 +#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16 -#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU -#define 
SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU +#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU -#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ +#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \ (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \ << SQ_CTXT_PKT_DROP_##member##_SHIFT) -#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 +#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0 -#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU +#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU #define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT) -#define SQ_CTXT_VLAN_TAG_SHIFT 0 -#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 -#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 -#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 +#define SQ_CTXT_VLAN_TAG_SHIFT 0 +#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16 +#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19 +#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23 -#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU -#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U -#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U -#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U +#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU +#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U +#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U +#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U -#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ - (((val) & SQ_CTXT_VLAN_##member##_MASK) \ +#define SQ_CTXT_VLAN_CEQ_SET(val, member) \ + (((val) & SQ_CTXT_VLAN_##member##_MASK) \ << SQ_CTXT_VLAN_##member##_SHIFT) -#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define SQ_CTXT_PREF_CI_HI_SHIFT 0 -#define SQ_CTXT_PREF_OWNER_SHIFT 4 +#define SQ_CTXT_PREF_CI_HI_SHIFT 0 +#define SQ_CTXT_PREF_OWNER_SHIFT 4 -#define SQ_CTXT_PREF_CI_HI_MASK 0xFU -#define SQ_CTXT_PREF_OWNER_MASK 0x1U +#define SQ_CTXT_PREF_CI_HI_MASK 0xFU +#define SQ_CTXT_PREF_OWNER_MASK 0x1U -#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define SQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define SQ_CTXT_PREF_SET(val, member) \ - (((val) & SQ_CTXT_PREF_##member##_MASK) \ +#define SQ_CTXT_PREF_SET(val, member) \ + (((val) & SQ_CTXT_PREF_##member##_MASK) \ << SQ_CTXT_PREF_##member##_SHIFT) -#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define SQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT) -#define RQ_CTXT_PI_IDX_SHIFT 0 -#define RQ_CTXT_CI_IDX_SHIFT 16 +#define RQ_CTXT_PI_IDX_SHIFT 0 +#define RQ_CTXT_CI_IDX_SHIFT 16 -#define RQ_CTXT_PI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_IDX_MASK 0xFFFFU +#define RQ_CTXT_PI_IDX_MASK 0xFFFFU +#define RQ_CTXT_CI_IDX_MASK 0xFFFFU -#define RQ_CTXT_CI_PI_SET(val, member) \ +#define RQ_CTXT_CI_PI_SET(val, 
member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 -#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 +#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21 +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30 +#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31 -#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU -#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU +#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U +#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U -#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ - (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ +#define RQ_CTXT_CEQ_ATTR_SET(val, member) \ + (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT) -#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 -#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 +#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0 +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28 +#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31 -#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU -#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U -#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U +#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU +#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U +#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U -#define RQ_CTXT_WQ_PAGE_SET(val, member) \ - (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ +#define RQ_CTXT_WQ_PAGE_SET(val, member) \ + (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \ << RQ_CTXT_WQ_PAGE_##member##_SHIFT) -#define RQ_CTXT_CQE_LEN_SHIFT 28 +#define RQ_CTXT_CQE_LEN_SHIFT 28 -#define RQ_CTXT_CQE_LEN_MASK 0x3U +#define RQ_CTXT_CQE_LEN_MASK 0x3U -#define RQ_CTXT_CQE_LEN_SET(val, member) \ +#define RQ_CTXT_CQE_LEN_SET(val, member) \ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT) -#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 -#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 -#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 +#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0 +#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14 +#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25 -#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU -#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU -#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU +#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU +#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU +#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU -#define RQ_CTXT_PREF_CI_HI_SHIFT 0 -#define RQ_CTXT_PREF_OWNER_SHIFT 4 +#define RQ_CTXT_PREF_CI_HI_SHIFT 0 +#define RQ_CTXT_PREF_OWNER_SHIFT 4 -#define RQ_CTXT_PREF_CI_HI_MASK 0xFU -#define RQ_CTXT_PREF_OWNER_MASK 0x1U +#define RQ_CTXT_PREF_CI_HI_MASK 0xFU +#define RQ_CTXT_PREF_OWNER_MASK 0x1U -#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 -#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 +#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0 +#define RQ_CTXT_PREF_CI_LOW_SHIFT 20 -#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU -#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU +#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU +#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU -#define RQ_CTXT_PREF_SET(val, member) \ - (((val) & RQ_CTXT_PREF_##member##_MASK) \ +#define RQ_CTXT_PREF_SET(val, member) \ + (((val) & RQ_CTXT_PREF_##member##_MASK) \ << RQ_CTXT_PREF_##member##_SHIFT) -#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 +#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0 -#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU +#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU -#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ - (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ +#define RQ_CTXT_WQ_BLOCK_SET(val, member) \ + (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \ << 
RQ_CTXT_WQ_BLOCK_##member##_SHIFT) #define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4) -#define WQ_PAGE_PFN_SHIFT 12 -#define WQ_BLOCK_PFN_SHIFT 9 +#define WQ_PAGE_PFN_SHIFT 12 +#define WQ_BLOCK_PFN_SHIFT 9 #define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT) #define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT) -/** - * Prepare the command queue header and converted it to big-endian format. - * - * @param[out] qp_ctxt_hdr - * Pointer to command queue context header structure to be initialized. - * @param[in] ctxt_type - * Type of context (SQ/RQ) to be set in header. - * @param[in] num_queues - * Number of queues. - * @param[in] q_id - * Starting queue ID for this context. - */ -static void -hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr, - enum hinic3_qp_ctxt_type ctxt_type, - uint16_t num_queues, uint16_t q_id) -{ - qp_ctxt_hdr->queue_type = ctxt_type; - qp_ctxt_hdr->num_queues = num_queues; - qp_ctxt_hdr->start_qid = q_id; - qp_ctxt_hdr->rsvd = 0; - - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr)); -} +#define CQE_CTX_CI_ADDR_SHIFT 4 /** * Initialize context structure for specified TXQ by configuring various queue @@ -401,7 +296,7 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) uint64_t wq_page_addr, wq_page_pfn, wq_block_pfn; uint32_t wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo; uint16_t pi_start, ci_start; - uint16_t wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT; + uint16_t wqe_type = rq->wqebb_shift; uint8_t intr_disable; /* RQ depth is in unit of 8 Bytes. */ @@ -446,6 +341,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE); rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN); break; + case HINIC3_COMPACT_RQ_WQE: + /* Use 8Byte WQE without SGE for CQE. */ + rq_ctxt->wq_pfn_hi_type_owner |= RQ_CTXT_WQ_PAGE_SET(3, WQE_TYPE); + break; default: PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type); } @@ -495,12 +394,10 @@ hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt) static int init_sq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL; - struct hinic3_sq_ctxt *sq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_txq *sq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -511,28 +408,15 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) q_id = 0; while (q_id < nic_dev->num_sqs) { - sq_ctxt_block = cmd_buf->buf; - sq_ctxt = sq_ctxt_block->sq_ctxt; - max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX ? 
HINIC3_Q_CTXT_MAX : (nic_dev->num_sqs - q_id); - - hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_SQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - sq = nic_dev->txqs[curr_id]; - hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]); - } - - cmd_buf->size = SQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store( + nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_SQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set SQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -563,12 +447,10 @@ init_sq_ctxts(struct hinic3_nic_dev *nic_dev) static int init_rq_ctxts(struct hinic3_nic_dev *nic_dev) { - struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL; - struct hinic3_rq_ctxt *rq_ctxt = NULL; struct hinic3_cmd_buf *cmd_buf = NULL; - struct hinic3_rxq *rq = NULL; uint64_t out_param = 0; - uint16_t q_id, curr_id, max_ctxts, i; + uint16_t q_id, curr_id, max_ctxts; + uint8_t cmd; int err = 0; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -578,29 +460,16 @@ init_rq_ctxts(struct hinic3_nic_dev *nic_dev) } q_id = 0; - while (q_id < nic_dev->num_rqs) { - rq_ctxt_block = cmd_buf->buf; - rq_ctxt = rq_ctxt_block->rq_ctxt; - +while (q_id < nic_dev->num_rqs) { max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX ? HINIC3_Q_CTXT_MAX : (nic_dev->num_rqs - q_id); - - hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr, - HINIC3_QP_CTXT_TYPE_RQ, - max_ctxts, q_id); - - for (i = 0; i < max_ctxts; i++) { - curr_id = q_id + i; - rq = nic_dev->rxqs[curr_id]; - hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]); - } - - cmd_buf->size = RQ_CTXT_SIZE(max_ctxts); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_qp_context_multi_store( + nic_dev, cmd_buf, + HINIC3_QP_CTXT_TYPE_RQ, q_id, max_ctxts); rte_atomic_thread_fence(rte_memory_order_seq_cst); err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if (err || out_param != 0) { PMD_DRV_LOG(ERR, "Set RQ ctxts failed, err: %d, out_param: %" PRIu64, @@ -633,9 +502,9 @@ static int clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, enum hinic3_qp_ctxt_type ctxt_type) { - struct hinic3_clean_queue_ctxt *ctxt_block = NULL; struct hinic3_cmd_buf *cmd_buf; uint64_t out_param = 0; + uint8_t cmd; int err; cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev); @@ -644,26 +513,11 @@ clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev, return -ENOMEM; } - /* Construct related command request. */ - ctxt_block = cmd_buf->buf; - /* Assumed max_rqs must be equal to max_sqs. */ - ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs; - ctxt_block->cmdq_hdr.queue_type = ctxt_type; - ctxt_block->cmdq_hdr.start_qid = 0; - /* - * Add a memory barrier to ensure that instructions are not out of order - * due to compilation optimization. - */ - rte_atomic_thread_fence(rte_memory_order_seq_cst); - - hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block)); - - cmd_buf->size = sizeof(*ctxt_block); + cmd = nic_dev->cmdq_ops->prepare_cmd_buf_clean_tso_lro_space(nic_dev, cmd_buf, ctxt_type); /* Send a command to hardware to clean up queue offload context. 
*/ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC, - HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT, - cmd_buf, &out_param, 0); + cmd, cmd_buf, &out_param, 0); if ((err) || (out_param)) { PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, err: %d, out_param: %" PRIu64, @@ -705,6 +559,65 @@ hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev) nic_dev->rx_buff_len = buf_size; } +#define HINIC3_RX_CQE_TIMER_LOOP 15 +#define HINIC3_RX_CQE_COALESCE_NUM 63 + +int +hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev) +{ + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rxq *rxq = NULL; + struct hinic3_rq_cqe_ctx cqe_ctx; + rte_iova_t rq_ci_paddr; + uint16_t out_size = sizeof(cqe_ctx); + uint16_t q_id = 0; + uint16_t cmd; + int err; + + if (!nic_dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + if (hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_CMDQ) + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX_HTN; + else { + cmd = HINIC3_NIC_CMD_SET_RQ_CI_CTX; + } + + memset(&cqe_ctx, 0, sizeof(cqe_ctx)); + + while (q_id < nic_dev->num_rqs) { + rxq = nic_dev->rxqs[q_id]; + if (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE) { + rq_ci_paddr = rxq->rq_ci_paddr >> CQE_CTX_CI_ADDR_SHIFT; + cqe_ctx.ci_addr_hi = upper_32_bits(rq_ci_paddr); + cqe_ctx.ci_addr_lo = lower_32_bits(rq_ci_paddr); + cqe_ctx.threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM; + cqe_ctx.timer_loop = HINIC3_RX_CQE_TIMER_LOOP; + } else { + cqe_ctx.threshold_cqe_num = 0; + cqe_ctx.timer_loop = 0; + } + + cqe_ctx.cqe_type = (rxq->wqe_type == HINIC3_COMPACT_RQ_WQE); + cqe_ctx.msix_entry_idx = rxq->msix_entry_idx; + cqe_ctx.rq_id = q_id; + + err = l2nic_msg_to_mgmt_sync(hwdev, cmd, + &cqe_ctx, sizeof(cqe_ctx), + &cqe_ctx, &out_size); + if (err || !out_size || cqe_ctx.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed, + qid: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, err, cqe_ctx.msg_head.status, out_size); + return -EFAULT; + } + q_id++; + } + + return 0; +} int hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) { @@ -768,13 +681,51 @@ hinic3_init_qp_ctxts(struct hinic3_nic_dev *nic_dev) } } + if (HINIC3_SUPPORT_RX_HW_COMPACT_CQE(nic_dev)) { + /* Init Rxq CQE context. 
*/ + err = hinic3_init_rq_cqe_ctxts(nic_dev); + if (err) { + PMD_DRV_LOG(ERR, "Set rq cqe context failed"); + goto set_cqe_ctx_fail; + } + } + return 0; +set_cqe_ctx_fail: set_cons_idx_table_err: hinic3_clean_root_ctxt(hwdev); return err; } +int +hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) +{ + struct hinic3_nic_dev *nic_dev = NULL; + struct hinic3_hwdev *hwdev = NULL; + struct hinic3_rq_enable msg; + uint16_t out_size = sizeof(msg); + int err; + + if (!dev) + return -EINVAL; + + hwdev = nic_dev->hwdev; + + memset(&msg, 0, sizeof(msg)); + msg.rq_enable = enable; + msg.rq_id = q_id; + err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RQ_ENABLE, + &msg, sizeof(msg), &msg, &out_size); + if (err || !out_size || msg.msg_head.status) { + PMD_DRV_LOG(ERR, "Set rq enable failed, qid: %u, enable: %d, err: %d, status: 0x%x, out_size: 0x%x", + q_id, enable, err, msg.msg_head.status, out_size); + return -EFAULT; + } + + return 0; +} + void hinic3_free_qp_ctxts(struct hinic3_hwdev *hwdev) { diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h index 697e781bd0..6f1eebe8ca 100644 --- a/drivers/net/hinic3/hinic3_nic_io.h +++ b/drivers/net/hinic3/hinic3_nic_io.h @@ -223,6 +223,31 @@ hinic3_write_db(void *db_addr, uint16_t q_id, int cos, uint8_t cflag, uint16_t p */ void hinic3_get_func_rx_buf_size(struct hinic3_nic_dev *nic_dev); +/** + * Initialize RQ integrated CQE context + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_init_rq_cqe_ctxts(struct hinic3_nic_dev *nic_dev); + +/** + * Set RQ disable or enable + * + * @param[in] nic_dev + * Pointer to ethernet device structure. + * @param[in] q_id + * Receive queue id. + * @param[in] enable + * 1: enable 0: disable + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable); + /** * Initialize qps contexts, set SQ ci attributes, arm all SQ. * -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
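For compact-CQE queues the RQ no longer owns a separate CQE ring; hinic3_init_rq_cqe_ctxts() instead tells firmware where to write back the hardware consumer index and how aggressively to coalesce. The sketch below only illustrates the address encoding used above: the write-back IOVA is passed in 16-byte units (shifted by CQE_CTX_CI_ADDR_SHIFT) and split into 32-bit halves. The helper name and the parameter widths are illustrative assumptions, since the full hinic3_rq_cqe_ctx definition is not part of the hunks quoted here.

static void
example_fill_compact_cqe_ctx(struct hinic3_rq_cqe_ctx *ctx, rte_iova_t ci_iova,
			     uint16_t msix_idx, uint16_t rq_id)
{
	/* Address of the hinic3_rq_ci_wb write-back area, in 16-byte units. */
	rte_iova_t shifted = ci_iova >> CQE_CTX_CI_ADDR_SHIFT;

	ctx->ci_addr_hi = upper_32_bits(shifted);
	ctx->ci_addr_lo = lower_32_bits(shifted);

	/* Coalescing defaults used by this patch for compact-CQE queues. */
	ctx->threshold_cqe_num = HINIC3_RX_CQE_COALESCE_NUM;	/* 63 */
	ctx->timer_loop = HINIC3_RX_CQE_TIMER_LOOP;		/* 15 */

	ctx->cqe_type = 1;	/* integrated (compact) CQE */
	ctx->msix_entry_idx = msix_idx;
	ctx->rq_id = rq_id;
}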
* [PATCH 5/7] net/hinic3: add rx ops to support Compact CQE 2026-01-31 10:05 [PATCH 0/7] hinic3 change for support new SPx NIC Feifei Wang ` (3 preceding siblings ...) 2026-01-31 10:06 ` [PATCH 4/7] net/hinic3: add fun init ops to support Compact CQE Feifei Wang @ 2026-01-31 10:06 ` Feifei Wang 2026-01-31 10:06 ` [PATCH 6/7] net/hinic3: add tx " Feifei Wang 2026-01-31 10:06 ` [PATCH 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang 6 siblings, 0 replies; 80+ messages in thread From: Feifei Wang @ 2026-01-31 10:06 UTC (permalink / raw) To: dev; +Cc: Feifei Wang From: Feifei Wang <wangfeifei40@huawei.com> In pkt receive path, use different func callback to seperate normal CQE process and Compact CQE process. Signed-off-by: Feifei Wang <wangfeifei40@huawei.com> --- drivers/net/hinic3/hinic3_rx.c | 232 ++++++++++++++++++++++++++------- drivers/net/hinic3/hinic3_rx.h | 147 +++++++++++++++++++++ 2 files changed, 329 insertions(+), 50 deletions(-) diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c index 3d5f4e4524..02c078de3d 100644 --- a/drivers/net/hinic3/hinic3_rx.c +++ b/drivers/net/hinic3/hinic3_rx.c @@ -219,11 +219,11 @@ hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq) while (free_wqebbs++ < rxq->q_depth) { ci = hinic3_get_rq_local_ci(rxq); - - rx_cqe = &rxq->rx_cqe[ci]; - - /* Clear done bit. */ - rx_cqe->status = 0; + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + rx_cqe = &rxq->rx_cqe[ci]; + /* Clear done bit. */ + rx_cqe->status = 0; + } rx_info = &rxq->rx_info[ci]; rte_pktmbuf_free(rx_info->mbuf); @@ -299,7 +299,7 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) for (i = 0; i < rearm_wqebbs; i++) { dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]); - /* Fill buffer address only. */ + /* Fill packet dma address into wqe. */ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) { rq_wqe->extend_wqe.buf_desc.sge.hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); @@ -307,11 +307,16 @@ hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq) hinic3_hw_be32(lower_32_bits(dma_addr)); rq_wqe->extend_wqe.buf_desc.sge.len = nic_dev->rx_buff_len; - } else { + } else if (rxq->wqe_type == HINIC3_NORMAL_RQ_WQE) { rq_wqe->normal_wqe.buf_hi_addr = hinic3_hw_be32(upper_32_bits(dma_addr)); rq_wqe->normal_wqe.buf_lo_addr = hinic3_hw_be32(lower_32_bits(dma_addr)); + } else { + rq_wqe->compact_wqe.buf_hi_addr = + hinic3_hw_be32(upper_32_bits(dma_addr)); + rq_wqe->compact_wqe.buf_lo_addr = + hinic3_hw_be32(lower_32_bits(dma_addr)); } rq_wqe = @@ -618,6 +623,31 @@ hinic3_poll_rq_empty(struct hinic3_rxq *rxq) return err; } +int +hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq) +{ + struct hinic3_rx_info *rx_info; + struct hinic3_rq_ci_wb rq_ci; + uint16_t sw_ci; + uint16_t hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(__atomic_load_n(&rxq->rq_ci->dw1.value, __ATOMIC_ACQUIRE)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + while (sw_ci != hw_ci) { + rx_info = &rxq->rx_info[sw_ci]; + rte_pktmbuf_free(rx_info->mbuf); + rx_info->mbuf = NULL; + + sw_ci++; + sw_ci &= rxq->q_mask; + hinic3_update_rq_local_ci(rxq, 1); + } + + return 0; +} + void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, uint32_t *cqe_hole_cnt, uint32_t *head_ci, uint32_t *head_done) @@ -701,14 +731,17 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) rte_spinlock_unlock(&nic_dev->queue_list_lock); /* Send flush rxq cmd to device. 
*/ - err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id); + else + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, false); if (err) { PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d", nic_dev->dev_name, rxq->q_id); goto rq_flush_failed; } - err = hinic3_poll_rq_empty(rxq); + err = nic_dev->tx_rx_ops.nic_rx_poll_rq_empty(rxq); if (err) { hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt, &head_ci, &head_done); @@ -724,6 +757,7 @@ hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) return 0; poll_rq_failed: + hinic3_set_rq_enable(nic_dev, rxq->q_id, true); rq_flush_failed: rte_spinlock_lock(&nic_dev->queue_list_lock); set_indir_failed: @@ -746,14 +780,22 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id); if (nic_dev->rss_state == HINIC3_RSS_ENABLE) { - err = hinic3_refill_indir_rqid(rxq); - if (err) { - PMD_DRV_LOG(ERR, - "Refill rq to indirect table failed, eth_dev:%s, queue_idx:%d err:%d", - nic_dev->dev_name, rxq->q_id, err); - hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_FDIR) != 0) + err = hinic3_set_rq_enable(nic_dev, rxq->q_id, true); + if(err) { + PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d\n", + nic_dev->dev_name, rxq->q_id); + } else { + err = hinic3_refill_indir_rqid(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Refill rq to indirect table failed," + "eth_dev:%s, queue_idx:%d err:%d", + nic_dev->dev_name, rxq->q_id, err); + hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id); + } } } + hinic3_rearm_rxq_mbuf(rxq); if (rxq->nic_dev->num_rss == 1) { err = hinic3_set_vport_enable(nic_dev->hwdev, true); @@ -772,12 +814,9 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq) static inline uint64_t -hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) +hinic3_rx_vlan(uint8_t vlan_offload, uint16_t vlan_tag, uint16_t *vlan_tci) { - uint16_t vlan_tag; - - vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len); - if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) { + if (!vlan_offload || vlan_tag == 0) { *vlan_tci = 0; return 0; } @@ -788,16 +827,14 @@ hinic3_rx_vlan(uint32_t offload_type, uint32_t vlan_len, uint16_t *vlan_tci) } static inline uint64_t -hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq) +hinic3_rx_csum(uint16_t csum_err, struct hinic3_rxq *rxq) { struct hinic3_nic_dev *nic_dev = rxq->nic_dev; - uint32_t csum_err; uint64_t flags; if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD))) return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN; - csum_err = HINIC3_GET_RX_CSUM_ERR(status); if (likely(csum_err == 0)) return (HINIC3_PKT_RX_IP_CKSUM_GOOD | HINIC3_PKT_RX_L4_CKSUM_GOOD); @@ -931,18 +968,119 @@ hinic3_start_all_rqs(struct rte_eth_dev *eth_dev) return err; } +bool +rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_cqe *cqe = NULL; + uint16_t sw_ci; + uint32_t status; + + sw_ci = hinic3_get_rq_local_ci(rxq); + *rx_cqe = &rxq->rx_cqe[sw_ci]; + cqe = *rx_cqe; + + status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&cqe->status, + rte_memory_order_acquire))); + if (!HINIC3_GET_RX_DONE(status)) + return false; + + return true; +} + +bool +rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe) +{ + struct hinic3_rq_ci_wb rq_ci; + 
struct rte_mbuf *rxm = NULL; + uint16_t sw_ci, hw_ci; + + sw_ci = hinic3_get_rq_local_ci(rxq); + rq_ci.dw1.value = hinic3_hw_cpu32(rte_atomic_load_explicit(&rxq->rq_ci->dw1.value, + rte_memory_order_acquire)); + hw_ci = rq_ci.dw1.bs.hw_ci; + + if (hw_ci == sw_ci) + return false; + + rxm = rxq->rx_info[sw_ci].mbuf; + + *rx_cqe = rte_mbuf_buf_addr(rxm, rxm->pool) + RTE_PKTMBUF_HEADROOM; + + return true; +} + +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + uint32_t dw0 = hinic3_hw_cpu32(rx_cqe->status); + uint32_t dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + uint32_t dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + uint32_t dw3 = hinic3_hw_cpu32(rx_cqe->hash_val); + + cqe_info->lro_num = RQ_CQE_STATUS_GET(dw0, NUM_LRO); + cqe_info->csum_err = RQ_CQE_STATUS_GET(dw0, CSUM_ERR); + + cqe_info->pkt_len = RQ_CQE_SGE_GET(dw1, LEN); + cqe_info->vlan_tag = RQ_CQE_SGE_GET(dw1, VLAN); + + cqe_info->ptype = HINIC3_GET_RX_PTYPE_OFFLOAD(dw0); + cqe_info->vlan_offload = RQ_CQE_OFFOLAD_TYPE_GET(dw2, VLAN_EN); + cqe_info->rss_type = RQ_CQE_OFFOLAD_TYPE_GET(dw2, RSS_TYPE); + cqe_info->rss_hash_value = dw3; +} + +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info) +{ + struct hinic3_rq_cqe *cqe = (struct hinic3_rq_cqe *)rx_cqe; + struct hinic3_cqe_info *info = (struct hinic3_cqe_info *)cqe_info; + uint32_t dw0, dw1, dw2; + + if (rxq->wqe_type != HINIC3_COMPACT_RQ_WQE) { + dw0 = hinic3_hw_cpu32(rx_cqe->status); + dw1 = hinic3_hw_cpu32(rx_cqe->vlan_len); + dw2 = hinic3_hw_cpu32(rx_cqe->offload_type); + } else { + /* Compact Rx CQE mode integrates cqe with packet in big endian way. */ + dw0 = be32_to_cpu(rx_cqe->status); + dw1 = be32_to_cpu(rx_cqe->vlan_len); + dw2 = be32_to_cpu(rx_cqe->offload_type); + } + + cqe_info->cqe_type = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_TYPE); + cqe_info->csum_err = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CSUM_ERR); + cqe_info->vlan_offload = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, VLAN_EN); + cqe_info->cqe_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, CQE_LEN); + cqe_info->pkt_len = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PKT_LEN); + cqe_info->ts_flag = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, TS_FLAG); + cqe_info->ptype = HINIC3_RQ_COMPACT_CQE_STATUS_GET(dw0, PTYPE); + cqe_info->rss_hash_value = dw1; + + if (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) { + cqe_info->lro_num = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, NUM_LRO); + cqe_info->vlan_tag = HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(dw2, VLAN); + } + + if (cqe_info->cqe_type == HINIC3_RQ_CQE_INTEGRATE) + cqe_info->data_offset = + (cqe_info->cqe_len == HINIC3_RQ_COMPACT_CQE_16BYTE) ? 
16 : 8; +} + #define HINIC3_RX_EMPTY_THRESHOLD 3 uint16_t hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct hinic3_rxq *rxq = rx_queue; + struct hinic3_nic_dev *nic_dev = rxq->nic_dev; struct hinic3_rx_info *rx_info = NULL; volatile struct hinic3_rq_cqe *rx_cqe = NULL; + struct hinic3_cqe_info cqe_info = {0}; struct rte_mbuf *rxm = NULL; - uint16_t sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0; - uint32_t status, pkt_len, vlan_len, offload_type, lro_num; + uint16_t sw_ci, rx_buf_len, pkts = 0; + uint32_t pkt_len; uint64_t rx_bytes = 0; - uint32_t hash_value; #ifdef HINIC3_XSTAT_PROF_RX uint64_t t1 = rte_get_tsc_cycles(); @@ -953,20 +1091,22 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) goto out; sw_ci = hinic3_get_rq_local_ci(rxq); - rx_buf_len = rxq->buf_len; while (pkts < nb_pkts) { rx_cqe = &rxq->rx_cqe[sw_ci]; - status = hinic3_hw_cpu32((uint32_t)(rte_atomic_load_explicit(&rx_cqe->status, - rte_memory_order_acquire))); - if (!HINIC3_GET_RX_DONE(status)) { + if (!nic_dev->tx_rx_ops.rx_cqe_done(rxq, &rx_cqe)) { rxq->rxq_stats.empty++; break; } - vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len); + nic_dev->tx_rx_ops.rx_get_cqe_info(rxq, rx_cqe, &cqe_info); - pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len); + pkt_len = cqe_info.pkt_len; + /* + * Compact Rx CQE mode integrates cqe with packet, + * so mbuf length needs to remove the length of cqe. + */ + rx_buf_len = rxq->buf_len - cqe_info.data_offset; rx_info = &rxq->rx_info[sw_ci]; rxm = rx_info->mbuf; @@ -982,7 +1122,7 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (likely(pkt_len <= rx_buf_len)) { rxm->data_len = (uint16_t)pkt_len; rxm->pkt_len = pkt_len; - wqebb_cnt++; + hinic3_update_rq_local_ci(rxq, 1); } else { rxm->data_len = rx_buf_len; rxm->pkt_len = rx_buf_len; @@ -991,33 +1131,28 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * If receive jumbo, updating ci will be done by * hinic3_recv_jumbo_pkt function. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1); - wqebb_cnt = 0; + hinic3_update_rq_local_ci(rxq, 1); hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len); sw_ci = hinic3_get_rq_local_ci(rxq); } - rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_off = RTE_PKTMBUF_HEADROOM + cqe_info.data_offset; rxm->port = rxq->port_id; /* 4. Rx checksum offload. */ - rxm->ol_flags |= hinic3_rx_csum(status, rxq); + rxm->ol_flags |= hinic3_rx_csum(cqe_info.csum_err, rxq); /* 5. Vlan offload. */ - offload_type = hinic3_hw_cpu32(rx_cqe->offload_type); - - rxm->ol_flags |= - hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci); + rxm->ol_flags |= hinic3_rx_vlan(cqe_info.vlan_offload, cqe_info.vlan_tag, + &rxm->vlan_tci); /* 6. RSS. */ - hash_value = hinic3_hw_cpu32(rx_cqe->hash_val); - rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value, - &rxm->hash.rss); + rxm->ol_flags |= hinic3_rx_rss_hash(cqe_info.rss_type, cqe_info.rss_hash_value, + &rxm->hash.rss); /* 8. LRO. */ - lro_num = HINIC3_GET_RX_NUM_LRO(status); - if (unlikely(lro_num != 0)) { + if (unlikely(cqe_info.lro_num != 0)) { rxm->ol_flags |= HINIC3_PKT_RX_LRO; - rxm->tso_segsz = pkt_len / lro_num; + rxm->tso_segsz = pkt_len / cqe_info.lro_num; } rx_cqe->status = 0; @@ -1027,9 +1162,6 @@ hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } if (pkts) { - /* 9. Update local ci. */ - hinic3_update_rq_local_ci(rxq, wqebb_cnt); - /* Update packet stats. 
*/ rxq->rxq_stats.packets += pkts; rxq->rxq_stats.bytes += rx_bytes; diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h index 1a92df59b7..9fc86ae20a 100644 --- a/drivers/net/hinic3/hinic3_rx.h +++ b/drivers/net/hinic3/hinic3_rx.h @@ -122,6 +122,53 @@ #define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD) +/* Compact CQE Field */ +/* cqe dw0 */ +#define RQ_COMPACT_CQE_STATUS_RXDONE_SHIFT 31 +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_SHIFT 30 +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_SHIFT 29 +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_SHIFT 28 +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_SHIFT 25 +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_SHIFT 24 +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_SHIFT 23 +#define RQ_COMPACT_CQE_STATUS_PKT_MC_SHIFT 21 +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_SHIFT 19 +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PTYPE_SHIFT 16 +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_SHIFT 0 + +#define RQ_COMPACT_CQE_STATUS_RXDONE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_TS_FLAG_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_VLAN_EN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_FORMAT_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_IP_TYPE_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CQE_LEN_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_PKT_MC_MASK 0x1U +#define RQ_COMPACT_CQE_STATUS_CSUM_ERR_MASK 0x3U +#define RQ_COMPACT_CQE_STATUS_PKT_TYPE_MASK 0x7U +#define RQ_COMPACT_CQE_STATUS_PTYPE_MASK 0xFFFU +#define RQ_COMPACT_CQE_STATUS_PKT_LEN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_STATUS_GET(val, member) \ + ((((val) >> RQ_COMPACT_CQE_STATUS_##member##_SHIFT) & \ + RQ_COMPACT_CQE_STATUS_##member##_MASK)) + +#define HINIC3_RQ_CQE_SEPARATE 0 +#define HINIC3_RQ_CQE_INTEGRATE 1 + +/* cqe dw2 */ +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_SHIFT 24 +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_SHIFT 8 + +#define RQ_COMPACT_CQE_OFFLOAD_NUM_LRO_MASK 0xFFU +#define RQ_COMPACT_CQE_OFFLOAD_VLAN_MASK 0xFFFFU + +#define HINIC3_RQ_COMPACT_CQE_OFFLOAD_GET(val, member) \ + (((val) >> RQ_COMPACT_CQE_OFFLOAD_##member##_SHIFT) & RQ_COMPACT_CQE_OFFLOAD_##member##_MASK) + +#define HINIC3_RQ_COMPACT_CQE_16BYTE 0 +#define HINIC3_RQ_COMPACT_CQE_8BYTE 1 /* Rx cqe checksum err */ #define HINIC3_RX_CSUM_IP_CSUM_ERR RTE_BIT32(0) #define HINIC3_RX_CSUM_TCP_CSUM_ERR RTE_BIT32(1) @@ -151,6 +198,9 @@ RTE_ETH_RSS_IPV6_TCP_EX | \ RTE_ETH_RSS_IPV6_UDP_EX) +#define HINIC3_COMPACT_CQE_PTYPE_SHIFT 16 + + struct hinic3_rxq_stats { uint64_t packets; uint64_t bytes; @@ -195,6 +245,23 @@ struct __rte_cache_aligned hinic3_rq_cqe { uint32_t pkt_info; }; +struct hinic3_cqe_info { + uint8_t data_offset; + uint8_t lro_num; + uint8_t vlan_offload; + uint8_t cqe_len; + uint8_t cqe_type; + uint8_t ts_flag; + + uint16_t csum_err; + uint16_t vlan_tag; + uint16_t ptype; + uint16_t pkt_len; + uint16_t rss_type; + + uint32_t rss_hash_value; +}; + /** * Attention: please do not add any member in hinic3_rx_info * because rxq bulk rearm mode will write mbuf in rx_info. 
@@ -220,13 +287,32 @@ struct hinic3_rq_normal_wqe { uint32_t cqe_lo_addr; }; +struct hinic3_rq_compact_wqe { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + struct hinic3_rq_wqe { union { + struct hinic3_rq_compact_wqe compact_wqe; struct hinic3_rq_normal_wqe normal_wqe; struct hinic3_rq_extend_wqe extend_wqe; }; }; +struct hinic3_rq_ci_wb { + union { + struct { + uint16_t cqe_num; + uint16_t hw_ci; + } bs; + uint32_t value; + } dw1; + + uint32_t rsvd[3]; +}; + + struct __rte_cache_aligned hinic3_rxq { struct hinic3_nic_dev *nic_dev; @@ -263,6 +349,10 @@ struct __rte_cache_aligned hinic3_rxq { struct hinic3_rq_cqe *rx_cqe; struct rte_mempool *mb_pool; + const struct rte_memzone *ci_mz; + struct hinic3_rq_ci_wb *rq_ci; + rte_iova_t rq_ci_paddr; + const struct rte_memzone *cqe_mz; rte_iova_t cqe_start_paddr; void *cqe_start_vaddr; @@ -290,6 +380,7 @@ void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev); int hinic3_update_rss_config(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int hinic3_poll_integrated_cqe_rq_empty(struct hinic3_rxq *rxq); int hinic3_poll_rq_empty(struct hinic3_rxq *rxq); void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, uint32_t *cqe_done_cnt, @@ -351,4 +442,60 @@ hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, uint16_t wqe_cnt) rxq->delta += wqe_cnt; } +/** + * Get receive cqe information + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * Receive cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe, + struct hinic3_cqe_info *cqe_info); + +/** + * Get receive compact cqe information + * + * @param[in] rx_queue + * Receive queue + * @param[in] rx_cqe + * Receive compact cqe + * @param[in] cqe_info + * Packet information parsed from cqe + */ +void +hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe + struct hinic3_cqe_info *cqe_info); + +/** + * Check whether pkt is received when CQE is separated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +rx_separate_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + +/** + * Check whether pkt is received when CQE is integrated + * + * @param[in] rxq + * Receive queue + * @param[in] rx_cqe + * The CQE written by hw + * @return + * True: Packet is received + * False: Packet is not received + */ +bool +rx_integrated_cqe_done(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe **rx_cqe); + #endif /* _HINIC3_RX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
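The Rx patch above routes completion handling through two per-device callbacks (rx_cqe_done and rx_get_cqe_info in tx_rx_ops), so the burst receive loop no longer needs to know whether the hardware writes a separate CQE ring entry or a compact CQE placed in front of the packet data; it only consumes the normalized hinic3_cqe_info. The fragment below is a minimal, self-contained sketch of that dispatch pattern only: every name in it (demo_cqe_info, demo_rx_ops, the bit positions, the 8-byte offset) is invented for illustration and does not match the driver's real structures or CQE layout.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the per-packet fields a driver fills from a CQE. */
struct demo_cqe_info {
	uint16_t pkt_len;
	uint8_t data_offset;	/* non-zero only when the CQE sits in front of the packet */
};

/* Per-device Rx callbacks, selected once when the queue format is known. */
struct demo_rx_ops {
	bool (*cqe_done)(const uint32_t *cqe);
	void (*get_cqe_info)(const uint32_t *cqe, struct demo_cqe_info *info);
};

/* Separate-CQE format: done bit in dw0 bit 31, length in dw1 bits 0..15. */
static bool separate_cqe_done(const uint32_t *cqe)
{
	return (cqe[0] >> 31) & 0x1;
}

static void separate_get_cqe_info(const uint32_t *cqe, struct demo_cqe_info *info)
{
	info->pkt_len = (uint16_t)(cqe[1] & 0xFFFF);
	info->data_offset = 0;
}

/* Compact format: length packed into dw0, 8-byte CQE prepended to the data. */
static bool compact_cqe_done(const uint32_t *cqe)
{
	return (cqe[0] >> 31) & 0x1;
}

static void compact_get_cqe_info(const uint32_t *cqe, struct demo_cqe_info *info)
{
	info->pkt_len = (uint16_t)(cqe[0] & 0xFFFF);
	info->data_offset = 8;	/* skip the in-buffer CQE before the payload */
}

/* The receive loop only ever talks to the callbacks, never to a CQE layout. */
static int demo_recv_one(const struct demo_rx_ops *ops, const uint32_t *cqe,
			 struct demo_cqe_info *info)
{
	if (!ops->cqe_done(cqe))
		return 0;

	ops->get_cqe_info(cqe, info);
	return 1;
}

int main(void)
{
	static const struct demo_rx_ops separate_ops = { separate_cqe_done, separate_get_cqe_info };
	static const struct demo_rx_ops compact_ops = { compact_cqe_done, compact_get_cqe_info };
	uint32_t cqe[2] = { (1u << 31) | 64u, 64u };
	struct demo_cqe_info info;

	if (demo_recv_one(&separate_ops, cqe, &info))
		printf("separate: len=%u off=%u\n", (unsigned int)info.pkt_len, (unsigned int)info.data_offset);
	if (demo_recv_one(&compact_ops, cqe, &info))
		printf("compact:  len=%u off=%u\n", (unsigned int)info.pkt_len, (unsigned int)info.data_offset);

	return 0;
}

Selecting the callback pair once, when the queue format is known, keeps format branches out of the per-packet path, which is the apparent reason the patch stores these function pointers in the device rather than testing the CQE type for every packet.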
* [PATCH 6/7] net/hinic3: add tx ops to support Compact CQE
  2026-01-31 10:05 [PATCH 0/7] hinic3 change for support new SPx NIC Feifei Wang
                   ` (4 preceding siblings ...)
  2026-01-31 10:06 ` [PATCH 5/7] net/hinic3: add rx " Feifei Wang
@ 2026-01-31 10:06 ` Feifei Wang
  2026-01-31 10:06 ` [PATCH 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang
  6 siblings, 0 replies; 80+ messages in thread
From: Feifei Wang @ 2026-01-31 10:06 UTC (permalink / raw)
  To: dev; +Cc: Feifei Wang
From: Feifei Wang <wangfeifei40@huawei.com>
In the packet send path, use different callback functions to configure
compact WQE and normal WQE offload.
Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
 drivers/net/hinic3/hinic3_ethdev.h |   3 +-
 drivers/net/hinic3/hinic3_tx.c     | 463 +++++++++++++++--------------
 drivers/net/hinic3/hinic3_tx.h     | 144 +++++++--
 3 files changed, 367 insertions(+), 243 deletions(-)
diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h
index 896f015341..3de542fffd 100644
--- a/drivers/net/hinic3/hinic3_ethdev.h
+++ b/drivers/net/hinic3/hinic3_ethdev.h
@@ -99,7 +99,8 @@ TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow);
 TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow);
 /* Tx WQE offload set callback function */
-typedef void (*nic_tx_set_wqe_offload_t)(void *wqe_info, void *wqe_combo);
+typedef void (*nic_tx_set_wqe_offload_t)(struct hinic3_wqe_info *wqe_info,
+					 struct hinic3_sq_wqe_combo *wqe_combo);
 /* Rx CQE info get callback function */
 typedef void (*nic_rx_get_cqe_info_t)(struct hinic3_rxq *rx_queue,
				       volatile struct hinic3_rq_cqe *rx_cqe,
diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c
index c896fcc76b..534619d32f 100644
--- a/drivers/net/hinic3/hinic3_tx.c
+++ b/drivers/net/hinic3/hinic3_tx.c
@@ -21,6 +21,7 @@
 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET	1
 #define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET	0
+#define MAX_TSO_NUM_FRAG 1024
 #define HINIC3_TX_OFFLOAD_MASK \
	(HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT)
@@ -28,7 +29,8 @@
 #define HINIC3_TX_CKSUM_OFFLOAD_MASK \
	(HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \
	 HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \
-	 HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG)
+	 HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_OUTER_UDP_CKSUM | \
+	 HINIC3_PKT_TX_TCP_SEG)
 static inline uint16_t
 hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq)
@@ -56,26 +58,23 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq)
 }
 static void *
-hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info)
+hinic3_get_sq_wqe(struct hinic3_txq *sq, uint16_t num_wqebbs, uint16_t *prod_idx)
 {
-	uint16_t cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx);
-	uint32_t end_pi;
+	*prod_idx = MASKED_QUEUE_IDX(sq, sq->prod_idx);
+	sq->prod_idx += num_wqebbs;
-	end_pi = cur_pi + wqe_info->wqebb_cnt;
-	sq->prod_idx += wqe_info->wqebb_cnt;
+	return NIC_WQE_ADDR(sq, *prod_idx);
+}
-	wqe_info->owner = (uint8_t)(sq->owner);
-	wqe_info->pi = cur_pi;
-	wqe_info->wrapped = 0;
+static inline uint16_t
+hinic3_get_and_update_sq_owner(struct hinic3_txq *sq, uint16_t curr_pi, uint16_t wqebb_cnt)
+{
+	uint16_t owner = sq->owner;
-	if (unlikely(end_pi >= sq->q_depth)) {
+	if (unlikely(curr_pi + wqebb_cnt >= sq->q_depth))
 		sq->owner = !sq->owner;
-		if (likely(end_pi > sq->q_depth))
-			wqe_info->wrapped = (uint8_t)(sq->q_depth - cur_pi);
-	}
-
-	return NIC_WQE_ADDR(sq, cur_pi);
+	return owner;
 }
 static inline void
@@ -90,61 +89,39 @@ hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info)
/** * Sets the WQE combination information in the transmit queue (SQ). * - * @param[in] txq + * @param[in] sq * Point to send queue. * @param[out] wqe_combo * Point to wqe_combo of send queue(SQ). - * @param[in] wqe - * Point to wqe of send queue(SQ). * @param[in] wqe_info * Point to wqe_info of send queue(SQ). */ static void -hinic3_set_wqe_combo(struct hinic3_txq *txq, +hinic3_set_wqe_combo(struct hinic3_txq *sq, struct hinic3_sq_wqe_combo *wqe_combo, - struct hinic3_sq_wqe *wqe, struct hinic3_wqe_info *wqe_info) { - wqe_combo->hdr = &wqe->compact_wqe.wqe_desc; - - if (wqe_info->offload) { - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->task = (struct hinic3_sq_task *) - (void *)txq->sq_head_addr; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr + txq->wqebb_size); - } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->task = &wqe->extend_wqe.task; - wqe_combo->bds_head = wqe->extend_wqe.buf_desc; - } + uint16_t tmp_pi; - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->hdr = hinic3_sq_get_wqebbs(sq, 1, &wqe_info->pi); + if (wqe_info->wqebb_cnt == 1) { + /* compact wqe */ + wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_4BYTES; + wqe_combo->task = (struct hinic3_sq_task *)&wqe_combo->hdr->queue_info; + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, 1); return; } - if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) { - wqe_combo->bds_head = (struct hinic3_sq_bufdesc *) - (void *)(txq->sq_head_addr); - } else { - wqe_combo->bds_head = - (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task); - } + /* extend normal wqe */ + wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; + wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES; + wqe_combo->task = hinic3_sq_get_wqebbs(sq, 1, &tmp_pi); + if (wqe_info->sge_cnt > 1) + wqe_combo->bds_head = hinic3_sq_get_wqebbs(sq, wqe_info->sge_cnt - 1, &tmp_pi); - if (wqe_info->wqebb_cnt > 1) { - wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE; - wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS; - - /* This section used as vlan insert, needs to clear. */ - wqe_combo->bds_head->rsvd = 0; - } else { - wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE; - } + wqe_info->owner = hinic3_get_and_update_sq_owner(sq, wqe_info->pi, wqe_info->wqebb_cnt); } int @@ -311,6 +288,8 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) /** * Prepare the data packet to be sent and calculate the internal L3 offset. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to be processed. * @param[out] inner_l3_offset @@ -319,14 +298,20 @@ hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt) * 0 as success, -EINVAL as failure. */ static int -hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) +hinic3_tx_offload_pkt_prepare(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + uint16_t *inner_l3_offset) { uint64_t ol_flags = mbuf->ol_flags; - /* Only support vxlan offload. 
*/ - if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) && - (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN))) - return -EINVAL; + if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) { + if (!(((ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN) && + HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_GENEVE) && + HINIC3_SUPPORT_GENEVE_OFFLOAD(nic_dev)) || + ((ol_flags & HINIC3_PKT_TX_TUNNEL_IPIP) && + HINIC3_SUPPORT_IPIP_OFFLOAD(nic_dev)))) + return -EINVAL; + } #ifdef RTE_LIBRTE_ETHDEV_DEBUG if (rte_validate_tx_offload(mbuf) != 0) @@ -358,107 +343,121 @@ hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, uint16_t *inner_l3_offset) return 0; } -static inline void -hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, uint16_t vlan_tag, - uint8_t vlan_type) +void +hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) | - SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) | - SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID); + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; + + task->pkt_info0 = 0; + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l4_en, INNER_L4_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->inner_l3_en, INNER_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->encapsulation, TUNNEL_FLAG); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l3_en, OUT_L3_EN); + task->pkt_info0 |= SQ_TASK_INFO0_SET(offload_info->out_l4_en, OUT_L4_EN); + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); + + if (wqe_combo->task_type == SQ_WQE_TASKSECT_16BYTES) { + task->ip_identify = 0; + task->pkt_info2 = 0; + task->vlan_offload = 0; + task->vlan_offload = SQ_TASK_INFO3_SET(offload_info->vlan_tag, VLAN_TAG) | + SQ_TASK_INFO3_SET(offload_info->vlan_sel, VLAN_TYPE) | + SQ_TASK_INFO3_SET(offload_info->vlan_valid, VLAN_TAG_VALID); + task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + } } -/** - * Set the corresponding offload information based on ol_flags of the mbuf. - * - * @param[in] mbuf - * Point to the mbuf for which offload needs to be set in the sending queue. - * @param[out] task - * Point to task of send queue(SQ). - * @param[out] wqe_info - * Point to wqe_info of send queue(SQ). - * @return - * 0 as success, -EINVAL as failure. 
- */ -static int -hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task, - struct hinic3_wqe_info *wqe_info) +void +hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo) { - uint64_t ol_flags = mbuf->ol_flags; - uint16_t pld_offset = 0; - uint32_t queue_info = 0; - uint16_t vlan_tag; + struct hinic3_sq_task *task = wqe_combo->task; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; task->pkt_info0 = 0; - task->ip_identify = 0; - task->pkt_info2 = 0; - task->vlan_offload = 0; + wqe->task->pkt_info0 = + SQ_TASK_INFO_SET(offload_info->out_l3_en, OUT_L3_EN) | + SQ_TASK_INFO_SET(offload_info->out_l4_en, OUT_L4_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l3_en, INNER_L3_EN) | + SQ_TASK_INFO_SET(offload_info->inner_l4_en, INNER_L4_EN) | + SQ_TASK_INFO_SET(offload_info->vlan_valid, VLAN_VALID) | + SQ_TASK_INFO_SET(offload_info->vlan_sel, VLAN_SEL) | + SQ_TASK_INFO_SET(offload_info->vlan_tag, VLAN_TAG); + + task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); +} + +static int +hinic3_set_tx_offload(struct hinic3_nic_dev *nic_dev, + struct rte_mbuf *mbuf, + struct hinic3_sq_wqe_combo *wqe_combo, + struct hinic3_wqe_info *wqe_info) +{ + uint64_t ol_flags = mbuf->ol_flags; + struct hinic3_offload_info *offload_info = &wqe_info->offload_info; /* Vlan offload. */ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) { - vlan_tag = mbuf->vlan_tci; - hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0); - task->vlan_offload = hinic3_hw_be32(task->vlan_offload); + offload_info->vlan_valid = 1; + offload_info->vlan_tag = mbuf->vlan_tci; + offload_info->vlan_sel = HINIC3_TX_TPID0; } - /* Cksum offload. */ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK)) - return 0; + goto set_tx_wqe_offload; /* Tso offload. */ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) { - pld_offset = wqe_info->payload_offset; - if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET) + wqe_info->queue_info.payload_offset = wqe_info->payload_offset; + if ((wqe_info->payload_offset >> 1) > MAX_PAYLOAD_OFFSET) return -EINVAL; - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); - - queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF); - - /* Set MSS value. */ - queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS); - queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS); + offload_info->inner_l3_en = 1; + offload_info->inner_l4_en = 1; + wqe_info->queue_info.tso = 1; + wqe_info->queue_info.mss = mbuf->tso_segsz; } else { if (ol_flags & HINIC3_PKT_TX_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN); + offload_info->inner_l3_en = 1; switch (ol_flags & HINIC3_PKT_TX_L4_MASK) { case HINIC3_PKT_TX_TCP_CKSUM: case HINIC3_PKT_TX_UDP_CKSUM: case HINIC3_PKT_TX_SCTP_CKSUM: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN); + offload_info->inner_l4_en = 1; break; - case HINIC3_PKT_TX_L4_NO_CKSUM: break; - default: PMD_DRV_LOG(INFO, "not support pkt type"); return -EINVAL; } } - /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) { case HINIC3_PKT_TX_TUNNEL_VXLAN: - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG); + case HINIC3_PKT_TX_TUNNEL_VXLAN_GPE: + case HINIC3_PKT_TX_TUNNEL_GENEVE: + offload_info->encapsulation = 1; + wqe_info->queue_info.udp_dp_en = 1; break; - case 0: break; default: - /* For non UDP/GRE tunneling, drop the tunnel packet. 
*/ PMD_DRV_LOG(INFO, "not support tunnel pkt type"); return -EINVAL; } if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) - task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN); + offload_info->out_l3_en = 1; - task->pkt_info0 = hinic3_hw_be32(task->pkt_info0); - task->pkt_info2 = hinic3_hw_be32(task->pkt_info2); - wqe_info->queue_info = queue_info; + if (ol_flags & HINIC3_PKT_TX_OUTER_UDP_CKSUM) + offload_info->out_l4_en = 1; + +set_tx_wqe_offload: + nic_dev->tx_rx_ops.tx_set_wqe_offload(wqe_info, wqe_combo); return 0; } @@ -477,7 +476,9 @@ static bool hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) { uint32_t total_len, limit_len, checked_len, left_len, adjust_mss; - uint32_t i, max_sges, left_sges, first_len; + uint32_t max_sges, left_sges, first_len; + uint32_t payload_len, frag_num; + uint32_t i; struct rte_mbuf *mbuf_head, *mbuf_first; struct rte_mbuf *mbuf_pre = mbuf; @@ -485,6 +486,17 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) mbuf_head = mbuf; mbuf_first = mbuf; + /* Calculate the number of message payload frag, + * if it exceeds the hardware limit of 10 bits, + * packet will be discarded. + */ + payload_len = mbuf_head->pkt_len - wqe_info->payload_offset; + frag_num = (payload_len + mbuf_head->tso_segsz - 1) / mbuf_head->tso_segsz; + if (frag_num > MAX_TSO_NUM_FRAG) { + PMD_DRV_LOG(WARNING, "tso frag num over hw limit, frag_num:0x%x.", frag_num); + return false; + } + /* Tso sge number validation. */ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) { checked_len = 0; @@ -544,9 +556,48 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) return true; } +static int +hinic3_non_tso_pkt_pre_process(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +{ + struct rte_mbuf *mbuf_pkt = mbuf; + uint16_t total_len = 0; + uint16_t i; + + if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs))) + return 0; + + /* Non-tso packet length must less than 64KB. */ + if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE)) + return -EINVAL; + + /* + * Mbuf number of non-tso packet must less than the sge number + * that nic can support. The excess part will be copied to another + * mbuf. + */ + for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) { + total_len += mbuf_pkt->data_len; + mbuf_pkt = mbuf_pkt->next; + } + + /* + * Max copy mbuf size is 4KB, packet will be dropped directly, + * if total copy length is more than it. + */ + if ((total_len + HINIC3_COPY_MBUF_SIZE) < mbuf->pkt_len) + return -EINVAL; + + wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE; + wqe_info->cpy_mbuf = 1; + + return 0; +} + /** * Checks and processes transport offload information for data packets. * + * @param[in] nic_dev + * Pointer to NIC device structure. * @param[in] mbuf * Point to the mbuf to send. * @param[in] wqe_info @@ -555,56 +606,29 @@ hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) * 0 as success, -EINVAL as failure. */ static int -hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info) +hinic3_get_tx_offload(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf, + struct hinic3_wqe_info *wqe_info) { uint64_t ol_flags = mbuf->ol_flags; - uint16_t i, total_len, inner_l3_offset = 0; + uint16_t inner_l3_offset = 0; int err; - struct rte_mbuf *mbuf_pkt = NULL; wqe_info->sge_cnt = mbuf->nb_segs; + wqe_info->cpy_mbuf_cnt = 0; /* Check if the packet set available offload flags. 
 */
	if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) {
		wqe_info->offload = 0;
-		return 0;
+		return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info);
	}
	wqe_info->offload = 1;
-	err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset);
+	err = hinic3_tx_offload_pkt_prepare(nic_dev, mbuf, &inner_l3_offset);
	if (err)
		return err;
-	/* Non tso mbuf only check sge num. */
+	/* Non-tso mbuf only check sge num. */
	if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) {
-		if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE))
-			/* Non tso packet len must less than 64KB. */
-			return -EINVAL;
-
-		if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs)))
-			/* Valid non-tso mbuf. */
-			return 0;
-
-		/*
-		 * The number of non-tso packet fragments must be less than 38,
-		 * and mbuf segs greater than 38 must be copied to other
-		 * buffers.
-		 */
-		total_len = 0;
-		mbuf_pkt = mbuf;
-		for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) {
-			total_len += mbuf_pkt->data_len;
-			mbuf_pkt = mbuf_pkt->next;
-		}
-
-		/* Default support copy total 4k mbuf segs. */
-		if ((uint32_t)(total_len + (uint16_t)HINIC3_COPY_MBUF_SIZE) <
-		    mbuf->pkt_len)
-			return -EINVAL;
-
-		wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE;
-		wqe_info->cpy_mbuf_cnt = 1;
-
-		return 0;
+		return hinic3_non_tso_pkt_pre_process(mbuf, wqe_info);
	}
	/* Tso mbuf. */
@@ -629,6 +653,7 @@ hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr,
	buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr));
	buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr));
	buf_descs->len = hinic3_hw_be32(len);
+	buf_descs->rsvd = 0;
 }
 static inline struct rte_mbuf *
@@ -701,7 +726,6 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf,
 {
	struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr;
	struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head;
-	uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt;
	uint16_t real_segs = mbuf->nb_segs;
	rte_iova_t dma_addr;
@@ -736,11 +760,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf,
			 * Parts of wqe is in sq bottom while parts
			 * of wqe is in sq head.
			 */
-			if (unlikely(wqe_info->wrapped &&
-				     (uint64_t)buf_desc == txq->sq_bot_sge_addr))
-				buf_desc = (struct hinic3_sq_bufdesc *)
-					   (void *)txq->sq_head_addr;
-
+			if (unlikely((uint64_t)buf_desc == txq->sq_bot_sge_addr))
+				buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr;
			hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len);
			buf_desc++;
		}
@@ -777,10 +798,8 @@ hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf,
				hinic3_hw_be32(lower_32_bits(dma_addr));
			wqe_desc->ctrl_len = mbuf->data_len;
		} else {
-			if (unlikely(wqe_info->wrapped &&
-				     ((uint64_t)buf_desc == txq->sq_bot_sge_addr)))
-				buf_desc = (struct hinic3_sq_bufdesc *)
-					   txq->sq_head_addr;
+			if (unlikely(((uint64_t)buf_desc == txq->sq_bot_sge_addr)))
+				buf_desc = (struct hinic3_sq_bufdesc *)txq->sq_head_addr;
			hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len);
		}
@@ -802,44 +821,44 @@ static void
 hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo,
		       struct hinic3_wqe_info *wqe_info)
 {
+	struct hinic3_queue_info *queue_info = &wqe_info->queue_info;
	struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr;
-
-	if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) {
-		wqe_desc->ctrl_len |=
-			SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
-			SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
-			SQ_CTRL_SET(wqe_info->owner, OWNER);
-		wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
-
-		/* Compact wqe queue_info will transfer to ucode.
*/ - wqe_desc->queue_info = 0; - - return; + uint32_t *qsf = &wqe_desc->queue_info; + + wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DIRECT) | + SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | + SQ_CTRL_SET(wqe_info->owner, OWNER); + + if (wqe_combo->wqe_type == SQ_WQE_EXTENDED_TYPE) { + wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | + SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | + SQ_CTRL_SET(SQ_WQE_SGL, DATA_FORMAT); + + *qsf = SQ_CTRL_QUEUE_INFO_SET(1, UC) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->udp_dp_en, TCPUDP_CS) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->tso, TSO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->payload_offset >> 1, PLDOFF) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE) | + SQ_CTRL_QUEUE_INFO_SET(queue_info->mss, MSS); + + if (!SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS)) { + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); + } else if (SQ_CTRL_QUEUE_INFO_GET(*qsf, MSS) < TX_MSS_MIN) { + /* MSS should not less than 80. */ + *qsf = SQ_CTRL_QUEUE_INFO_CLEAR(*qsf, MSS); + *qsf |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); + } + *qsf = hinic3_hw_be32(*qsf); + } else { + wqe_desc->ctrl_len |= SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->sctp, SCTP) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->udp_dp_en, UDP_DP_EN) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->ufo, UFO) | + SQ_CTRL_COMPACT_QUEUE_INFO_SET(queue_info->pkt_type, PKT_TYPE); } - wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) | - SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) | - SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) | - SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) | - SQ_CTRL_SET(wqe_info->owner, OWNER); - wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len); - - wqe_desc->queue_info = wqe_info->queue_info; - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC); - - if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) { - wqe_desc->queue_info |= - SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS); - } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) < - TX_MSS_MIN) { - /* Mss should not less than 80. */ - wqe_desc->queue_info = - SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS); - wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS); - } - - wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info); } /** @@ -863,7 +882,6 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) struct hinic3_sq_wqe_combo wqe_combo = {0}; struct hinic3_sq_wqe *sq_wqe = NULL; struct hinic3_wqe_info wqe_info = {0}; - uint32_t offload_err, free_cnt; uint64_t tx_bytes = 0; uint16_t free_wqebb_cnt, nb_tx; @@ -885,16 +903,26 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Tx loop routine. */ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { mbuf_pkt = *tx_pkts++; - if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) { + if (unlikely(hinic3_get_tx_offload(txq->nic_dev, mbuf_pkt, &wqe_info))) { txq->txq_stats.offload_errors++; break; } - if (!wqe_info.offload) - wqe_info.wqebb_cnt = wqe_info.sge_cnt; - else - /* Use extended sq wqe with normal TS. */ - wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1; + wqe_info.wqebb_cnt = wqe_info.sge_cnt; + if (likely(wqe_info.offload || wqe_info.wqebb_cnt > 1)) { + if (txq->tx_wqe_compact_task) { + /** + * One more wqebb is needed for compact task under two situations: + * 1. TSO: MSS field is needed, no available space for + * compact task in compact wqe. + * 2. 
SGE number > 1: wqe is handled as extended wqe by nic.
+				 */
+				if (mbuf_pkt->ol_flags & HINIC3_PKT_TX_TCP_SEG || wqe_info.wqebb_cnt > 1)
+					wqe_info.wqebb_cnt++;
+			} else
+				/* Use extended sq wqe with normal TS */
+				wqe_info.wqebb_cnt++;
+		}
		free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq);
		if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
@@ -907,28 +935,17 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
			}
		}
-		/* Get sq wqe address from wqe_page. */
-		sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info);
-		if (unlikely(!sq_wqe)) {
-			txq->txq_stats.tx_busy++;
-			break;
-		}
-
-		/* Task or bd section maybe wrapped for one wqe. */
-		hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info);
+		/* Task or bd section maybe wrapped for one wqe. */
+		hinic3_set_wqe_combo(txq, &wqe_combo, &wqe_info);
-		wqe_info.queue_info = 0;
		/* Fill tx packet offload into qsf and task field. */
-		if (wqe_info.offload) {
-			offload_err = hinic3_set_tx_offload(mbuf_pkt,
-							    wqe_combo.task,
-							    &wqe_info);
+		offload_err = hinic3_set_tx_offload(txq->nic_dev, mbuf_pkt,
+						&wqe_combo, &wqe_info);
			if (unlikely(offload_err)) {
				hinic3_put_sq_wqe(txq, &wqe_info);
				txq->txq_stats.offload_errors++;
				break;
			}
-		}
		/* Fill sq_wqe buf_desc and bd_desc. */
		err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo,
@@ -944,7 +961,13 @@ hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
		tx_info->mbuf = mbuf_pkt;
		tx_info->wqebb_cnt = wqe_info.wqebb_cnt;
-		hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info);
+
+		/*
+		 * For wqe compact type, no need to prepare
+		 * sq ctrl info.
+		 */
+		if (wqe_combo.wqe_type != SQ_WQE_COMPACT_TYPE)
+			hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info);
		tx_bytes += mbuf_pkt->pkt_len;
	}
@@ -998,8 +1021,8 @@ hinic3_stop_sq(struct hinic3_txq *txq)
			  hinic3_get_sq_local_ci(txq),
			  hinic3_get_sq_hw_ci(txq),
			  MASKED_QUEUE_IDX(txq, txq->prod_idx),
-			  free_wqebbs,
-			  txq->q_depth);
+			free_wqebbs,
+			txq->q_depth);
	}
	return err;
diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h
index d150f7c6a4..f604eaf43b 100644
--- a/drivers/net/hinic3/hinic3_tx.h
+++ b/drivers/net/hinic3/hinic3_tx.h
@@ -6,30 +6,40 @@
 #define _HINIC3_TX_H_
 #define MAX_SINGLE_SGE_SIZE		65536
-#define HINIC3_NONTSO_PKT_MAX_SGE	38 /**< non-tso max sge 38. */
+#define HINIC3_NONTSO_PKT_MAX_SGE	32 /**< non-tso max sge 32. */
 #define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE)
 #define HINIC3_TSO_PKT_MAX_SGE		127 /**< tso max sge 127. */
 #define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE)
-/* Tx offload info. */
-struct hinic3_tx_offload_info {
-	uint8_t outer_l2_len;
-	uint8_t outer_l3_type;
-	uint16_t outer_l3_len;
-
-	uint8_t inner_l2_len;
-	uint8_t inner_l3_type;
-	uint16_t inner_l3_len;
-
-	uint8_t tunnel_length;
-	uint8_t tunnel_type;
-	uint8_t inner_l4_type;
-	uint8_t inner_l4_len;
+/* Tx wqe queue info */
+struct hinic3_queue_info {
+	uint8_t pri;
+	uint8_t uc;
+	uint8_t sctp;
+	uint8_t udp_dp_en;
+	uint8_t tso;
+	uint8_t ufo;
+	uint8_t payload_offset;
+	uint8_t pkt_type;
+	uint16_t mss;
+	uint16_t rsvd;
+};
-	uint16_t payload_offset;
-	uint8_t inner_l4_tcp_udp;
-	uint8_t rsvd0; /**< Reserved field. */
+/* Tx wqe offload info */
+struct hinic3_offload_info {
+	uint8_t encapsulation;
+	uint8_t esp_next_proto;
+	uint8_t inner_l4_en;
+	uint8_t inner_l3_en;
+	uint8_t out_l4_en;
+	uint8_t out_l3_en;
+	uint8_t ipsec_offload;
+	uint8_t pkt_1588;
+	uint8_t vlan_sel;
+	uint8_t vlan_valid;
+	uint16_t vlan_tag;
+	uint32_t ip_identify;
 };
 /* Tx wqe ctx.
*/ @@ -42,14 +52,15 @@ struct hinic3_wqe_info { uint8_t rsvd0; /**< Reserved field 0. */ uint16_t payload_offset; - uint8_t wrapped; + uint8_t rsvd1; /**< Reserved field 1. */ uint8_t owner; uint16_t pi; uint16_t wqebb_cnt; - uint16_t rsvd1; /**< Reserved field 1. */ + uint16_t rsvd2; /**< Reserved field 2. */ - uint32_t queue_info; + struct hinic3_queue_info queue_info; + struct hinic3_offload_info offload_info; }; /* Descriptor for the send queue of wqe. */ @@ -103,8 +114,15 @@ struct hinic3_sq_wqe_combo { uint32_t task_type; }; +/* Tx queue ctrl info */ +enum sq_wqe_type { + SQ_NORMAL_WQE = 0, + SQ_DIRECT_WQE = 1, +}; + enum sq_wqe_data_format { SQ_NORMAL_WQE = 0, + SQ_WQE_INLINE_DATA = 1, }; /* Indicates the type of a WQE. */ @@ -117,7 +135,7 @@ enum sq_wqe_ec_type { /* Indicates the type of tasks with different lengths. */ enum sq_wqe_tasksect_len_type { - SQ_WQE_TASKSECT_46BITS = 0, + SQ_WQE_TASKSECT_4BYTES = 0, SQ_WQE_TASKSECT_16BYTES = 1, }; @@ -177,6 +195,33 @@ enum sq_wqe_tasksect_len_type { ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT))) +/* Compact queue info */ +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_SHIFT 14 +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_SHIFT 16 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_SHIFT 24 +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_SHIFT 25 +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_SHIFT 26 +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_SHIFT 27 + +#define SQ_CTRL_COMPACT_QUEUE_INFO_PKT_TYPE_MASK 0x3U +#define SQ_CTRL_COMPACT_QUEUE_INFO_PLDOFF_MASK 0xFFU +#define SQ_CTRL_COMPACT_QUEUE_INFO_UFO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_TSO_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_UDP_DP_EN_MASK 0x1U +#define SQ_CTRL_COMPACT_QUEUE_INFO_SCTP_MASK 0x1U + +#define SQ_CTRL_COMPACT_QUEUE_INFO_SET(val, member) \ + (((u32)(val) & SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_GET(val, member) \ + (((val) >> SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT) & \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK) + +#define SQ_CTRL_COMPACT_QUEUE_INFO_CLEAR(val, member) \ + ((val) & (~(SQ_CTRL_COMPACT_QUEUE_INFO_##member##_MASK << \ + SQ_CTRL_COMPACT_QUEUE_INFO_##member##_SHIFT))) + /* Setting and obtaining task information */ #define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19 #define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22 @@ -229,6 +274,37 @@ enum sq_wqe_tasksect_len_type { (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \ SQ_TASK_INFO3_##member##_MASK) +/* compact wqe task field */ +#define SQ_TASK_INFO_PKT_1588_SHIFT 31 +#define SQ_TASK_INFO_IPSEC_PROTO_SHIFT 30 +#define SQ_TASK_INFO_OUT_L3_EN_SHIFT 28 +#define SQ_TASK_INFO_OUT_L4_EN_SHIFT 27 +#define SQ_TASK_INFO_INNER_L3_EN_SHIFT 25 +#define SQ_TASK_INFO_INNER_L4_EN_SHIFT 24 +#define SQ_TASK_INFO_ESP_NEXT_PROTO_SHIFT 22 +#define SQ_TASK_INFO_VLAN_VALID_SHIFT 19 +#define SQ_TASK_INFO_VLAN_SEL_SHIFT 16 +#define SQ_TASK_INFO_VLAN_TAG_SHIFT 0 + +#define SQ_TASK_INFO_PKT_1588_MASK 0x1U +#define SQ_TASK_INFO_IPSEC_PROTO_MASK 0x1U +#define SQ_TASK_INFO_OUT_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_OUT_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L3_EN_MASK 0x1U +#define SQ_TASK_INFO_INNER_L4_EN_MASK 0x1U +#define SQ_TASK_INFO_ESP_NEXT_PROTO_MASK 0x3U +#define SQ_TASK_INFO_VLAN_VALID_MASK 0x1U +#define SQ_TASK_INFO_VLAN_SEL_MASK 0x7U +#define SQ_TASK_INFO_VLAN_TAG_MASK 0xFFFFU + +#define SQ_TASK_INFO_SET(val, member) \ + (((u32)(val) & SQ_TASK_INFO_##member##_MASK) << \ + SQ_TASK_INFO_##member##_SHIFT) + +#define 
SQ_TASK_INFO_GET(val, member) \ + (((val) >> SQ_TASK_INFO_##member##_SHIFT) & \ + SQ_TASK_INFO_##member##_MASK) + /* Defines the TX queue status. */ enum hinic3_txq_status { HINIC3_TXQ_STATUS_START = 0, @@ -298,6 +374,8 @@ struct __rte_cache_aligned hinic3_txq { uint64_t sq_head_addr; uint64_t sq_bot_sge_addr; uint32_t cos; + uint8_t tx_wqe_compact_task; + uint8_t rsvd[3]; struct hinic3_txq_stats txq_stats; #ifdef HINIC3_XSTAT_PROF_TX uint64_t prof_tx_end_tsc; @@ -311,4 +389,26 @@ uint16_t hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb int hinic3_stop_sq(struct hinic3_txq *txq); int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev); int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt); + +/** + * Set wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_normal_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); + +/** + * Set compact wqe task section + * + * @param[in] wqe_info + * Packet info parsed according to mbuf + * @param[in] wqe_combo + * Wqe need to format + */ +void hinic3_tx_set_compact_task_offload(struct hinic3_wqe_info *wqe_info, + struct hinic3_sq_wqe_combo *wqe_combo); #endif /**< _HINIC3_TX_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
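The Tx patch above does two things that are easy to miss in the diff: it sizes each send WQE up front (one WQEBB per SGE, plus one extra WQEBB whenever the 16-byte task section is required, for example for TSO or multi-SGE packets when the compact task format is enabled), and it packs the offload bits with shift/mask macros such as SQ_TASK_INFO_SET. The snippet below models only those two ideas in isolation; the DEMO_* macros, field positions and demo_pkt fields are assumptions made up for this sketch and are not the hardware's actual WQE layout.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Generic shift/mask packing, mirroring the SQ_*_SET() macro style. */
#define DEMO_FIELD_SET(val, shift, mask) (((uint32_t)(val) & (mask)) << (shift))

#define DEMO_VLAN_TAG_SHIFT	0
#define DEMO_VLAN_TAG_MASK	0xFFFFU
#define DEMO_VLAN_VALID_SHIFT	19
#define DEMO_VLAN_VALID_MASK	0x1U
#define DEMO_L3_EN_SHIFT	25
#define DEMO_L3_EN_MASK		0x1U

struct demo_pkt {
	uint16_t nb_segs;	/* number of mbuf segments (SGEs) */
	bool tso;		/* needs TSO, i.e. an MSS field in the WQE */
	uint16_t vlan_tci;
	bool vlan_insert;
	bool l3_csum;
};

/*
 * A compact WQE only has room for a 4-byte task section, so anything that
 * needs the 16-byte task section (TSO) or more than one buffer descriptor
 * falls back to an extended WQE and costs one extra WQEBB.
 */
static uint16_t demo_wqebb_count(const struct demo_pkt *pkt)
{
	uint16_t wqebbs = pkt->nb_segs;

	if (pkt->tso || pkt->nb_segs > 1)
		wqebbs++;	/* extra WQEBB for the 16-byte task section */

	return wqebbs;
}

/* Pack one dword of the task section from already-parsed offload info. */
static uint32_t demo_pack_task_dw0(const struct demo_pkt *pkt)
{
	uint32_t dw0 = 0;

	dw0 |= DEMO_FIELD_SET(pkt->vlan_insert, DEMO_VLAN_VALID_SHIFT, DEMO_VLAN_VALID_MASK);
	dw0 |= DEMO_FIELD_SET(pkt->vlan_tci, DEMO_VLAN_TAG_SHIFT, DEMO_VLAN_TAG_MASK);
	dw0 |= DEMO_FIELD_SET(pkt->l3_csum, DEMO_L3_EN_SHIFT, DEMO_L3_EN_MASK);

	return dw0;
}

int main(void)
{
	struct demo_pkt small = { .nb_segs = 1, .vlan_tci = 100,
				  .vlan_insert = true, .l3_csum = true };
	struct demo_pkt tso_pkt = { .nb_segs = 4, .tso = true };

	printf("small pkt: %u wqebb(s), task dw0=0x%08x\n",
	       (unsigned int)demo_wqebb_count(&small),
	       (unsigned int)demo_pack_task_dw0(&small));
	printf("tso pkt:   %u wqebb(s)\n", (unsigned int)demo_wqebb_count(&tso_pkt));

	return 0;
}

The shift/mask style mirrors the driver's SET macros: each field is masked to its width and shifted to its position, so a whole dword can be assembled from independent flags before a single write to the descriptor.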
* [PATCH 7/7] net/hinic3: use different callback func to support htn fdir
  2026-01-31 10:05 [PATCH 0/7] hinic3 change for support new SPx NIC Feifei Wang
                   ` (5 preceding siblings ...)
  2026-01-31 10:06 ` [PATCH 6/7] net/hinic3: add tx " Feifei Wang
@ 2026-01-31 10:06 ` Feifei Wang
  2026-01-31 18:17   ` [REVIEW] net/hinic3: use different callback func to support htnfdir Stephen Hemminger
  6 siblings, 1 reply; 80+ messages in thread
From: Feifei Wang @ 2026-01-31 10:06 UTC (permalink / raw)
  To: dev; +Cc: Feifei Wang
From: Feifei Wang <wangfeifei40@huawei.com>
For the new SPx NIC, flow rules are created differently from the previous
SPx NICs, so use different callback functions to separate the two paths.
Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
---
 drivers/net/hinic3/base/hinic3_nic_cfg.c |  43 +-
 drivers/net/hinic3/base/hinic3_nic_cfg.h |  22 +-
 drivers/net/hinic3/hinic3_fdir.c         | 589 ++++++++++++++++-------
 drivers/net/hinic3/hinic3_fdir.h         | 373 ++++++++++++--
 4 files changed, 778 insertions(+), 249 deletions(-)
diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c
index f12a2aedee..7aa9cf9cb0 100644
--- a/drivers/net/hinic3/base/hinic3_nic_cfg.c
+++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c
@@ -1026,7 +1026,7 @@ hinic3_set_rx_lro_timer(struct hinic3_hwdev *hwdev, uint32_t timer_value)
 }
 int
-hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer,
+hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer,
		uint32_t lro_max_pkt_len)
 {
	uint8_t ipv4_en = 0, ipv6_en = 0;
@@ -1465,38 +1465,41 @@ hinic3_vf_get_default_cos(struct hinic3_hwdev *hwdev, uint8_t *cos_id)
	return 0;
 }
-/**
- * Set the Ethernet type filtering rule for the FDIR of a NIC.
- *
- * @param[in] hwdev
- * Pointer to hardware device structure.
- * @param[in] pkt_type
- * Indicate the packet type.
- * @param[in] queue_id
- * Indicate the queue id.
- * @param[in] en
- * Indicate whether to add or delete an operation. 1 - add; 0 - delete.
- *
- * @return
- * 0 on success, non-zero on failure.
- */ int -hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en) +hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, uint8_t pkt_type, + struct hinic3_ethertype_filter *ethertype_filter, + uint8_t en) { + struct hinic3_nic_dev *nic_dev = NULL; struct hinic3_set_fdir_ethertype_rule ethertype_cmd; uint16_t out_size = sizeof(ethertype_cmd); + uint16_t block_id; + uint32_t index = 0; int err; - if (!hwdev) + if (!hwdev || !hwdev->dev_handle) return -EINVAL; + nic_dev = (struct hinic3_nic_dev*)hwdev->dev_handle; + + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + if (en != 0) { + index = hinic3_tcam_alloc_index(nic_dev, &block_id); + if (index == HINIC3_TCAM_INVALID_INDEX) { + return -ENOMEM; + } + + index += HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id); + } else { + index = ethertype_filter->tcam_index[pkt_type]; + } memset(ðertype_cmd, 0, sizeof(struct hinic3_set_fdir_ethertype_rule)); ethertype_cmd.func_id = hinic3_global_func_id(hwdev); ethertype_cmd.pkt_type = pkt_type; ethertype_cmd.pkt_type_en = en; - ethertype_cmd.qid = (uint8_t)queue_id; + ethertype_cmd.index = index; + ethertype_cmd.qid = (uint8_t)ethertype_filter->queue; err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, HINIC3_NIC_CMD_SET_FDIR_STATUS, diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h index 34372a0678..024bbb9bf8 100644 --- a/drivers/net/hinic3/base/hinic3_nic_cfg.h +++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h @@ -1204,7 +1204,7 @@ int hinic3_set_rx_vlan_offload(struct hinic3_hwdev *hwdev, uint8_t en); * @return * 0 on success, non-zero on failure. */ -int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, uint8_t lro_en, uint32_t lro_timer, +int hinic3_set_rx_lro_state(struct hinic3_hwdev *hwdev, bool lro_en, uint32_t lro_timer, uint32_t lro_max_pkt_len); /** @@ -1523,8 +1523,24 @@ int hinic3_get_feature_from_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, */ int hinic3_set_feature_to_hw(struct hinic3_hwdev *hwdev, uint64_t *s_feature, uint16_t size); -int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, - uint8_t pkt_type, uint16_t queue_id, uint8_t en); +/** + * Set the Ethernet type filtering rule for the FDIR of a NIC. + * + * @param[in] hwdev + * Pointer to hardware device structure. + * @param[in] pkt_type + * Indicate the packet type. + * @param[in] ethertype_filter + * Pointer to ethertype_filter structure. + * @param[in] en + * Indicate whether to add or delete an operation. 1 - add; 0 - delete. + * + * @return + * 0 on success, non-zero on failure. + */ +int hinic3_set_fdir_ethertype_filter(struct hinic3_hwdev *hwdev, uint8_t pkt_type, + struct hinic3_ethertype_filter *ethertype_filter, + uint8_t en); int hinic3_set_link_status_follow(struct hinic3_hwdev *hwdev, enum hinic3_link_follow_status status); diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c index 263a281729..47dbd519d9 100644 --- a/drivers/net/hinic3/hinic3_fdir.c +++ b/drivers/net/hinic3/hinic3_fdir.c @@ -8,9 +8,7 @@ #include "base/hinic3_nic_cfg.h" #include "hinic3_ethdev.h" -#define HINIC3_UINT1_MAX 0x1 -#define HINIC3_UINT4_MAX 0xf -#define HINIC3_UINT15_MAX 0x7fff +#define HINIC3_INVALID_INDEX -1 #define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \ (&((struct hinic3_nic_dev *)(nic_dev))->tcam) @@ -77,6 +75,8 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. 
*/ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src IPv4. */ tcam_key->key_mask.sipv4_h = @@ -99,15 +99,9 @@ hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip); } -static void -hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) +static void hinic3_fdir_ipv6_tcam_key_init_sip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) { - /* Fill type of ip. */ - tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; - tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; - - /* Fill src IPv6. */ tcam_key->key_mask_ipv6.sipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); tcam_key->key_mask_ipv6.sipv6_key1 = @@ -140,8 +134,11 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); tcam_key->key_info_ipv6.sipv6_key7 = HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); +} - /* Fill dst IPv6. */ +static void hinic3_fdir_ipv6_tcam_key_init_dip(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ tcam_key->key_mask_ipv6.dipv6_key0 = HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); tcam_key->key_mask_ipv6.dipv6_key1 = @@ -176,6 +173,26 @@ hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); } +static void hinic3_fdir_ipv6_tcam_key_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + hinic3_fdir_ipv6_tcam_key_init_sip(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init_dip(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + /* Fill type of ip. */ + tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6.vlan_flag = 0; + + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); +} + /** * Set the TCAM information in notunnel scenario. * @@ -204,6 +221,10 @@ hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev, tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + + tcam_key->key_mask.vlan_flag = 1; + tcam_key->key_info.vlan_flag = 0; + tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -223,6 +244,8 @@ hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + tcam_key->key_mask.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info.vlan_flag = 0; /* Fill src ipv4. */ tcam_key->key_mask.sipv4_h = @@ -252,6 +275,8 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, /* Fill type of ip. */ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_vxlan_ipv6.vlan_flag = HINIC3_UINT1_MAX; + tcam_key->key_info_vxlan_ipv6.vlan_flag = 0; /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. 
*/ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 = @@ -288,77 +313,6 @@ hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule, HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]); } -static void -hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule, - struct hinic3_tcam_key *tcam_key) -{ - tcam_key->key_mask_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]); - tcam_key->key_mask_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]); - tcam_key->key_mask_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]); - tcam_key->key_mask_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_mask_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]); - tcam_key->key_info_ipv6.sipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]); - tcam_key->key_info_ipv6.sipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]); - tcam_key->key_info_ipv6.sipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - tcam_key->key_info_ipv6.sipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]); - - tcam_key->key_mask_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]); - tcam_key->key_mask_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]); - tcam_key->key_mask_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]); - tcam_key->key_mask_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_mask_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key0 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key1 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]); - tcam_key->key_info_ipv6.dipv6_key2 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key3 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]); - tcam_key->key_info_ipv6.dipv6_key4 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key5 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]); - tcam_key->key_info_ipv6.dipv6_key6 = - HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); - tcam_key->key_info_ipv6.dipv6_key7 = - HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]); -} - static void hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, struct 
hinic3_fdir_filter *rule, @@ -370,11 +324,15 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto; tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info_ipv6.tunnel_type = rule->tunnel_type; tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX; tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; + tcam_key->key_mask_ipv6.vlan_flag = HINIC3_UINT1_MAX; + + tcam_key->key_info_ipv6.vlan_flag = 0; tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX; tcam_key->key_info_ipv6.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -386,7 +344,7 @@ hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev, tcam_key->key_info_ipv6.sport = rule->key_spec.src_port; if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY) - hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key); + hinic3_fdir_ipv6_tcam_key_init(rule, tcam_key); } /** @@ -448,9 +406,11 @@ hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev, HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX; - tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN; + tcam_key->key_info.tunnel_type = rule->tunnel_type; + tcam_key->key_mask.vlan_flag = 1; tcam_key->key_mask.function_id = HINIC3_UINT15_MAX; + tcam_key->key_info.vlan_flag = 0; tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX; @@ -479,6 +439,258 @@ hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev, tcam_key_calculate(tcam_key, fdir_tcam_rule); } +static void +hinic3_fdir_tcam_key_set_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_sip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0]); + tcam_key->key_mask_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x1]); + tcam_key->key_mask_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x2]); + tcam_key->key_mask_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_mask_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0]); + 
tcam_key->key_info_ipv6_htn.sipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0]); + tcam_key->key_info_ipv6_htn.sipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x1]); + tcam_key->key_info_ipv6_htn.sipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x2]); + tcam_key->key_info_ipv6_htn.sipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->src_ip[0x3]); + tcam_key->key_info_ipv6_htn.sipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->src_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_ipv6_dip(struct rte_eth_ipv6_flow *ipv6_mask, + struct rte_eth_ipv6_flow *ipv6_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0]); + tcam_key->key_mask_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x1]); + tcam_key->key_mask_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x2]); + tcam_key->key_mask_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_mask_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_mask->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key0 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key1 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0]); + tcam_key->key_info_ipv6_htn.dipv6_key2 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key3 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x1]); + tcam_key->key_info_ipv6_htn.dipv6_key4 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key5 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x2]); + tcam_key->key_info_ipv6_htn.dipv6_key6 = + HINIC3_32_UPPER_16_BITS(ipv6_spec->dst_ip[0x3]); + tcam_key->key_info_ipv6_htn.dipv6_key7 = + HINIC3_32_LOWER_16_BITS(ipv6_spec->dst_ip[0x3]); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(struct rte_eth_ipv4_flow *ipv4_mask, + struct rte_eth_ipv4_flow *ipv4_spec, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_mask_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->src_ip); + tcam_key->key_info_htn.outer_sipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->src_ip); + tcam_key->key_info_htn.outer_sipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->src_ip); + + tcam_key->key_mask_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_mask_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_mask->dst_ip); + tcam_key->key_info_htn.outer_dipv4_h = + HINIC3_32_UPPER_16_BITS(ipv4_spec->dst_ip); + tcam_key->key_info_htn.outer_dipv4_l = + HINIC3_32_LOWER_16_BITS(ipv4_spec->dst_ip); +} + +static void +hinic3_fdir_tcam_key_set_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void 
hinic3_fdir_tcam_key_set_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_sip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.ipv6, + &rule->key_spec.ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_notunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL; + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_ipv6_info(rule, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_outer_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_ipv6_htn.outer_ip_type = HINIC3_UINT1_MAX; + tcam_key->key_info_ipv6_htn.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_outer_ipv4_sip_dip(&rule->key_mask.ipv4, + &rule->key_spec.ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv4_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV4; + + hinic3_fdir_tcam_key_set_ipv4_sip_dip(&rule->key_mask.inner_ipv4, + &rule->key_spec.inner_ipv4, tcam_key); +} + +static void +hinic3_fdir_tcam_key_set_inner_ipv6_info(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_vxlan_ipv6_htn.ip_type = HINIC3_UINT2_MAX; + tcam_key->key_info_vxlan_ipv6_htn.ip_type = HINIC3_FDIR_IP_TYPE_IPV6; + + hinic3_fdir_tcam_key_set_ipv6_dip(&rule->key_mask.inner_ipv6, + &rule->key_spec.inner_ipv6, tcam_key); +} + +static void +hinic3_fdir_tcam_tunnel_htn_init(struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key) +{ + tcam_key->key_mask_htn.tunnel_type = HINIC3_UINT3_MAX; + tcam_key->key_info_htn.tunnel_type = rule->tunnel_type; + + tcam_key->key_mask_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_mask_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_h = + HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id); + tcam_key->key_info_htn.vni_l = + HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id); + + if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_outer_ipv4_info(rule, tcam_key); + + if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4) + hinic3_fdir_tcam_key_set_inner_ipv4_info(rule, tcam_key); + else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6) + hinic3_fdir_tcam_key_set_inner_ipv6_info(rule, tcam_key); +} + +void +hinic3_fdir_tcam_info_htn_init(struct rte_eth_dev *dev, + struct hinic3_fdir_filter *rule, + struct hinic3_tcam_key *tcam_key, + struct hinic3_tcam_cfg_rule *fdir_tcam_rule) +{ + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); + + tcam_key->key_mask_htn.function_id_h = HINIC3_UINT5_MAX; + tcam_key->key_mask_htn.function_id_l = HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_l = hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT5_MAX; + tcam_key->key_info_htn.function_id_h = + (hinic3_global_func_id(nic_dev->hwdev) >> HINIC3_UINT5_WIDTH) & HINIC3_UINT5_MAX; 
+ + tcam_key->key_mask_htn.ip_proto = rule->key_mask.proto; + tcam_key->key_info_htn.ip_proto = rule->key_spec.proto; + + tcam_key->key_mask_htn.sport = rule->key_mask.src_port; + tcam_key->key_info_htn.sport = rule->key_spec.src_port; + + tcam_key->key_mask_htn.dport = rule->key_mask.dst_port; + tcam_key->key_info_htn.dport = rule->key_spec.dst_port; + if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL) + hinic3_fdir_tcam_notunnel_htn_init(rule, tcam_key); + else + hinic3_fdir_tcam_tunnel_htn_init(rule, tcam_key); + + fdir_tcam_rule->data.qid = rule->rq_index; + + tcam_key_calculate(tcam_key, fdir_tcam_rule); +} + /** * Find filter in given ethertype filter list. * @@ -513,19 +725,30 @@ hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_li * Point to the tcam filter list. * @param[in] key * The tcam key to find. + * @param[in] action_type + * The type of action. + * @param[in] tcam_index + * The index of tcam. * @return * If a matching filter is found, the filter is returned, otherwise NULL. */ static inline struct hinic3_tcam_filter * hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list, - struct hinic3_tcam_key *key) + struct hinic3_tcam_key *key, + uint8_t action_type, uint16_t tcam_index) { struct hinic3_tcam_filter *it; - TAILQ_FOREACH(it, filter_list, entries) { - if (memcmp(key, &it->tcam_key, - sizeof(struct hinic3_tcam_key)) == 0) { - return it; + if (action_type == HINIC3_ACTION_ADD) { + TAILQ_FOREACH (it, filter_list, entries) { + if (memcmp(key, &it->tcam_key, sizeof(struct hinic3_tcam_key)) == 0) + return it; + } + } else { + TAILQ_FOREACH(it, filter_list, entries) { + if ((it->index + HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(it->dynamic_block_id)) + == tcam_index) + return it; } } @@ -588,12 +811,8 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, * * @param[in] dev * Pointer to ethernet device structure. - * @param[in] fdir_tcam_rule - * Indicate the filtering rule to be searched for. * @param[in] tcam_info * Ternary Content-Addressable Memory (TCAM) information. - * @param[in] tcam_filter - * Point to the TCAM filter. * @param[out] tcam_index * Indicate the TCAM index to be searched for. * @result @@ -601,9 +820,7 @@ hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info, */ static struct hinic3_tcam_dynamic_block * hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, - struct hinic3_tcam_cfg_rule *fdir_tcam_rule, struct hinic3_tcam_info *tcam_info, - struct hinic3_tcam_filter *tcam_filter, uint16_t *tcam_index) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -616,6 +833,9 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, uint16_t index; int err; + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) != 0) + rule_nums += nic_dev->ethertype_rule_nums; + /* * Check whether the number of filtering rules reaches the maximum * capacity of dynamic TCAM blocks. @@ -662,8 +882,7 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, if (tmp == NULL || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "Fdir filter dynamic lookup for index failed!"); + PMD_DRV_LOG(ERR, "Fdir filter dynamic lookup for index failed!"); goto look_up_failed; } @@ -674,20 +893,13 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, /* Find the first free position. 
*/ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) { - PMD_DRV_LOG(ERR, - "tcam block 0x%x supports filter rules is full!", + PMD_DRV_LOG(ERR, "tcam block 0x%x supports filter rules is full!", tmp->dynamic_block_id); goto look_up_failed; } - tcam_filter->dynamic_block_id = tmp->dynamic_block_id; - tcam_filter->index = index; *tcam_index = index; - fdir_tcam_rule->index = - HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) + - index; - return tmp; look_up_failed: @@ -702,6 +914,51 @@ hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev, return NULL; } +uint16_t +hinic3_tcam_alloc_index(struct hinic3_nic_dev nic_dev, uint16_t *block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + uint16_t index = 0; + + tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev, tcam_info, &index); + if (tmp == NULL) { + PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); + return HINIC3_TCAM_INVALID_INDEX; + } + + tmp->dynamic_index[index] = 1; + tmp->dynamic_index_cnt++; + + *block_id = tmp->dynamic_block_id; + + return index; +} + +void +hinic3_tcam_index_free(struct hinic3_nic_dev nic_dev, uint16_t index, uint16_t *block_id) +{ + struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev); + struct hinic3_tcam_dynamic_block *tmp = NULL; + + TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list, entries) { + if (tmp->dynamic_block_id == block_id) + break; + } + + if (tmp == NULL || tmp->dynamic_block_id != block_id) { + PMD_DRV_LOG(ERR, "Fdir filter del dynamic lookup for block failed!"); + return; + } + + tmp->dynamic_index[index] = 0; + tmp->dynamic_index_cnt--; + if (tmp->dynamic_index_cnt ==0) { + hinic3_free_tcam_block(nic_dev->hwdev, &block_id); + hinic3_free_dynamic_block_resource(tcam_info, tmp); + } +} + /** * Add a TCAM filter. * @@ -722,11 +979,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); - struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL; - struct hinic3_tcam_dynamic_block *tmp = NULL; struct hinic3_tcam_filter *tcam_filter; - uint16_t tcam_block_index = 0; - uint16_t index = 0; int err; /* Alloc TCAM filter memory. */ @@ -737,35 +990,11 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, tcam_filter->tcam_key = *tcam_key; tcam_filter->queue = (uint16_t)(fdir_tcam_rule->data.qid); - - /* Add new TCAM rules. */ - if (nic_dev->tcam_rule_nums == 0) { - err = hinic3_alloc_tcam_block(nic_dev->hwdev, &tcam_block_index); - if (err) { - PMD_DRV_LOG(ERR, - "Fdir filter tcam alloc block failed!"); - goto failed; - } - - dynamic_block_ptr = - hinic3_alloc_dynamic_block_resource(tcam_info, - tcam_block_index); - if (dynamic_block_ptr == NULL) { - PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first block memory failed!"); - goto alloc_block_failed; - } - } - - /* - * Look for an available index in the dynamic block to store the new - * TCAM filter. 
- */ - tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info, - tcam_filter, &index); - if (tmp == NULL) { - PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!"); - goto lookup_tcam_index_failed; - } + tcam_filter->index = hinic3_tcam_alloc_index(nic_dev, &tcam_filter->dynamic_block_id); + if (tcam_filter->index == HINIC3_TCAM_INVALID_INDEX) + goto failed; + fdir_tcam_rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tcam_filter->dynamic_block_id) + + tcam_filter->index; /* Add a new TCAM rule to the network device. */ err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule, @@ -785,10 +1014,6 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, /* Add a filter to the end of the queue. */ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries); - /* Update dynamic index. */ - tmp->dynamic_index[index] = 1; - tmp->dynamic_index_cnt++; - nic_dev->tcam_rule_nums++; PMD_DRV_LOG(INFO, @@ -796,7 +1021,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, hinic3_global_func_id(nic_dev->hwdev)); PMD_DRV_LOG(INFO, "tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d", - tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index, + tcam_filter->dynamic_block_id, tcam_filter->index, fdir_tcam_rule->index, fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums); return 0; @@ -807,13 +1032,7 @@ hinic3_add_tcam_filter(struct rte_eth_dev *dev, add_tcam_rules_failed: lookup_tcam_index_failed: - if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL) - hinic3_free_dynamic_block_resource(tcam_info, - dynamic_block_ptr); - -alloc_block_failed: - if (nic_dev->tcam_rule_nums == 0) - hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tcam_filter->dynamic_block_id); failed: rte_free(tcam_filter); @@ -873,14 +1092,9 @@ hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev, dynamic_block_id, tcam_filter->index, index, tmp->dynamic_index_cnt - 1, nic_dev->tcam_rule_nums - 1); - tmp->dynamic_index[tcam_filter->index] = 0; - tmp->dynamic_index_cnt--; - nic_dev->tcam_rule_nums--; - if (tmp->dynamic_index_cnt == 0) { - hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id); + hinic3_tcam_index_free(nic_dev, tcam_filter->index, tmp->dynamic_block_id); - hinic3_free_dynamic_block_resource(tcam_info, tmp); - } + nic_dev->tcam_rule_nums--; /* If the number of rules is 0, the TCAM filter is disabled. */ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) @@ -930,6 +1144,7 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add) { + struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); struct hinic3_tcam_info *tcam_info = HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private); struct hinic3_tcam_filter *tcam_filter; @@ -940,11 +1155,15 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule)); memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key)); - hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, - &fdir_tcam_rule); + if ((hinic3_get_driver_feature(nic_dev) & NIC_F_HTN_FDIR) == 0) + hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + else + hinic3_fdir_tcam_info_htn_init(dev, fdir_filter, &tcam_key, &fdir_tcam_rule); + /* Search for a filter. 
*/ tcam_filter = - hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key); + hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_ADD, HINIC3_INVALID_INDEX); if (tcam_filter != NULL && add) { PMD_DRV_LOG(ERR, "Filter exists."); return -EEXIST; @@ -965,6 +1184,12 @@ hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, fdir_filter->tcam_index = (int)(fdir_tcam_rule.index); } else { + tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key, + HINIC3_ACTION_NOT_ADD, fdir_filter->tcam_index); + if (tcam_filter == NULL) { + PMD_DRV_LOG(ERR, "Filter doesn't exist."); + return -ENOENT; + } PMD_DRV_LOG(INFO, "begin to del tcam filter"); ret = hinic3_del_tcam_filter(dev, tcam_filter); if (ret) @@ -1088,7 +1313,7 @@ hinic3_free_fdir_filter(struct rte_eth_dev *dev) static int hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1097,7 +1322,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1107,7 +1332,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Request Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1117,7 +1342,7 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, /* Setting the ARP Response Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1129,19 +1354,19 @@ hinic3_flow_set_arp_filter(struct rte_eth_dev *dev, set_arp_rep_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP_REQ, - ethertype_filter->queue, !add); + ethertype_filter, !add); set_arp_req_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_ARP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1150,7 +1375,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the LACP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1160,7 +1385,7 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, /* Setting the OAM Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_OAM, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d", add ? 
"Add" : "Del", ret); @@ -1172,14 +1397,14 @@ hinic3_flow_set_slow_filter(struct rte_eth_dev *dev, set_arp_oam_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LACP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1188,7 +1413,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the LLDP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1198,7 +1423,7 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, /* Setting the CDCP Filter. */ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_CDCP, - ethertype_filter->queue, add); + ethertype_filter, add); if (ret) { PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d", add ? "Add" : "Del", ret); @@ -1210,14 +1435,14 @@ hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev, set_arp_cdcp_failed: hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, HINIC3_PKT_TYPE_LLDP, - ethertype_filter->queue, !add); + ethertype_filter, !add); return ret; } static int hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); @@ -1245,7 +1470,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, return hinic3_flow_set_arp_filter(dev, ethertype_filter, add); case RTE_ETHER_TYPE_RARP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_RARP, ethertype_filter, add); case RTE_ETHER_TYPE_SLOW: return hinic3_flow_set_slow_filter(dev, ethertype_filter, add); @@ -1255,11 +1480,11 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, case RTE_ETHER_TYPE_CNM: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_CNM, ethertype_filter, add); case RTE_ETHER_TYPE_ECP: return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev, - HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add); + HINIC3_PKT_TYPE_ECP, ethertype_filter, add); default: PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d", @@ -1270,7 +1495,7 @@ hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev, } static int -hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter) +hinic3_flow_ethertype_rule_nums(struct hinic3_ethertype_filter *ethertype_filter) { switch (ethertype_filter->ether_type) { case RTE_ETHER_TYPE_ARP: @@ -1309,7 +1534,7 @@ hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filte */ int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add) { /* Get dev private info. 
*/ diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h index 8659f588d9..6522950e13 100644 --- a/drivers/net/hinic3/hinic3_fdir.h +++ b/drivers/net/hinic3/hinic3_fdir.h @@ -14,9 +14,39 @@ #define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index)) +#define HINIC3_TCAM_GET_DYNAMIC_BLOCK_INDEX(index) \ + ((index) / HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_GET_INDEX_IN_BLOCK(index) \ + ((index) % HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) + +#define HINIC3_TCAM_INVALID_INDEX 0xFFFF + +enum hinic3_ether_type { + HINIC3_PKT_TYPE_ARP = 1, + HINIC3_PKT_TYPE_ARP_REQ, + HINIC3_PKT_TYPE_ARP_REP, + HINIC3_PKT_TYPE_RARP, + HINIC3_PKT_TYPE_LACP, + HINIC3_PKT_TYPE_LLDP, + HINIC3_PKT_TYPE_OAM, + HINIC3_PKT_TYPE_CDCP, + HINIC3_PKT_TYPE_CNM, + HINIC3_PKT_TYPE_ECP = 10, + HINIC3_PKT_TYPE_BUTT, + + HINIC3_PKT_UNKNOWN = 31, +}; + +enum hinic3_rule_type { + HINIC3_RULE_TYPE_FILTER, + HINIC3_RULE_TYPE_PFE, +}; + /* Indicate a traffic filtering rule. */ struct rte_flow { TAILQ_ENTRY(rte_flow) node; + enum hinic3_rule_type rule_type; enum rte_filter_type filter_type; void *rule; }; @@ -30,6 +60,8 @@ struct hinic3_fdir_rule_key { uint16_t src_port; uint16_t dst_port; uint8_t proto; + uint8_t vlan_flag; + uint16_t ether_type; }; struct hinic3_fdir_filter { @@ -42,17 +74,34 @@ struct hinic3_fdir_filter { uint32_t rq_index; /**< Queue assigned when matched. */ }; +struct hinic3_ethertype_filter { + int tcam_index[HINIC3_PKT_TYPE_BUTT]; + uint16_t ether_type; /**< Ether type to match */ + uint16_t queue; /**< Queue assigned to when match*/ +}; + /* This structure is used to describe a basic filter type. */ struct hinic3_filter_t { uint16_t filter_rule_nums; enum rte_filter_type filter_type; - struct rte_eth_ethertype_filter ethertype_filter; + struct hinic3_ethertype_filter ethertype_filter; struct hinic3_fdir_filter fdir_filter; }; +enum hinic3_action_type { + HINIC3_ACTION_ADD, + HINIC3_ACTION_NOT_ADD, +}; + enum hinic3_fdir_tunnel_mode { HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0, - HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1, + HINIC3_FDIR_TUNNEL_MODE_NVGRE = 2, + HINIC3_FDIR_TUNNEL_MODE_FC = 3, + HINIC3_FDIR_TUNNEL_MODE_GPE = 4, + HINIC3_FDIR_TUNNEL_MODE_GENEVE = 5, + HINIC3_FDIR_TUNNEL_MODE_NSH = 6, + HINIC3_FDIR_TUNNEL_MODE_IPIP = 7, }; enum hinic3_fdir_ip_type { @@ -61,7 +110,6 @@ enum hinic3_fdir_ip_type { HINIC3_FDIR_IP_TYPE_ANY = 2, }; -/* Describe the key structure of the TCAM. 
*/ struct hinic3_tcam_key_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -69,32 +117,34 @@ struct hinic3_tcam_key_mem { uint32_t tunnel_type : 4; uint32_t rsvd1 : 4; + uint32_t function_id : 15; uint32_t ip_type : 1; - uint32_t sipv4_h : 16; - uint32_t sipv4_l : 16; + uint32_t sipv4_l : 16; uint32_t dipv4_h : 16; + uint32_t dipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t dport : 16; uint32_t sport : 16; uint32_t rsvd5 : 16; - uint32_t rsvd6 : 16; + uint32_t outer_sipv4_h : 16; uint32_t outer_sipv4_l : 16; uint32_t outer_dipv4_h : 16; uint32_t outer_dipv4_l : 16; - uint32_t vni_h : 16; + uint32_t vni_h : 16; uint32_t vni_l : 16; uint32_t rsvd7 : 16; #else @@ -110,13 +160,14 @@ struct hinic3_tcam_key_mem { uint32_t dipv4_h : 16; uint32_t sipv4_l : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t dipv4_l : 16; uint32_t rsvd3; uint32_t dport : 16; - uint32_t rsvd4 : 16; + uint32_t ether_type : 16; uint32_t rsvd5 : 16; uint32_t sport : 16; @@ -135,18 +186,90 @@ struct hinic3_tcam_key_mem { #endif }; -/* - * Define the IPv6-related TCAM key data structure in common - * scenarios or IPv6 tunnel scenarios. - */ +struct hinic3_tcam_key_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h: 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t sipv4_h : 16; + + uint32_t sipv4_l : 16; + uint32_t rsvd5 : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t rsvd8 : 16; + uint32_t dipv4_h : 16; + + uint32_t dipv4_l : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd5 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t sipv4_h : 16; + uint32_t vni_l : 16; + + uint32_t rsvd5 : 16; + uint32_t sipv4_l : 16; + + uint32_t rsvd6; + uint32_t rsvd7; + + uint32_t dipv4_h : 16; + uint32_t rsvd8 : 16; + + uint32_t sport : 16; + uint32_t dipv4_l :16; + + uint32_t rsvd9 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; - /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */ uint32_t ip_proto : 8; uint32_t tunnel_type : 4; uint32_t outer_ip_type : 1; - uint32_t rsvd1 : 3; + uint32_t vlan_flag : 1; + uint32_t rsvd1 : 2; uint32_t function_id : 15; uint32_t ip_type : 1; @@ -179,7 +302,9 @@ struct hinic3_tcam_key_ipv6_mem { uint32_t dipv6_key7 : 16; uint32_t rsvd2 : 16; #else - uint32_t rsvd1 : 3; + uint32_t rsvd1 : 2; + uint32_t vlan_flag : 1; + uint32_t outer_ip_type : 1; uint32_t tunnel_type : 4; uint32_t ip_proto : 8; @@ -218,10 +343,86 @@ struct hinic3_tcam_key_ipv6_mem { #endif }; -/* - * Define the tcam key value data structure related to IPv6 in - * the VXLAN scenario. 
- */ +struct hinic3_tcam_key_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t sipv6_key0 : 16; + + uint32_t sipv6_key1 : 16; + uint32_t sipv6_key2 : 16; + + uint32_t sipv6_key3 : 16; + uint32_t sipv6_key4 : 16; + + uint32_t sipv6_key5 : 16; + uint32_t sipv6_key6 : 16; + + uint32_t sipv6_key7 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t sipv6_key0 : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t sipv6_key2 : 16; + uint32_t sipv6_key1 : 16; + + uint32_t sipv6_key4 : 16; + uint32_t sipv6_key3 : 16; + + uint32_t sipv6_key6 : 16; + uint32_t sipv6_key5 : 16; + + uint32_t dipv6_key0 : 16; + uint32_t sipv6_key7 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd2 : 16; + uint32_t dport : 16; +#endif +}; + struct hinic3_tcam_key_vxlan_ipv6_mem { #if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) uint32_t rsvd0 : 16; @@ -246,7 +447,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t sport : 16; - uint32_t rsvd2 : 16; + uint32_t vlan_flag : 1; + uint32_t rsvd2 : 15; uint32_t rsvd3 : 16; uint32_t outer_sipv4_h : 16; @@ -281,7 +483,8 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { uint32_t dport : 16; uint32_t dipv6_key7 : 16; - uint32_t rsvd2 : 16; + uint32_t rsvd2 : 15; + uint32_t vlan_flag : 1; uint32_t sport : 16; uint32_t outer_sipv4_h : 16; @@ -298,6 +501,88 @@ struct hinic3_tcam_key_vxlan_ipv6_mem { #endif }; +struct hinic3_tcam_key_vxlan_ipv6_mem_htn { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t rsvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 3; + uint32_t function_id_h : 5; + + uint32_t function_id_l : 5; + uint32_t ip_type : 2; + uint32_t outer_ip_type : 1; + uint32_t rsvd1 : 8; + uint32_t outer_sipv4_h : 16; + + uint32_t outer_sipv4_l : 16; + uint32_t outer_dipv4_h : 16; + + uint32_t outer_dipv4_l : 16; + uint32_t rsvd2 : 8; + uint32_t vni_h : 8; + + uint32_t vni_l : 16; + uint32_t rsvd3 : 16; + + uint32_t rsvd4 : 16; + uint32_t dipv6_key0 : 16; + + uint32_t dipv6_key1 : 16; + uint32_t dipv6_key2 : 16; + + uint32_t dipv6_key3 : 16; + uint32_t dipv6_key4 : 16; + + uint32_t dipv6_key5 : 16; + uint32_t dipv6_key6 : 16; + + uint32_t dipv6_key7 : 16; + uint32_t sport : 16; + + uint32_t dport : 16; + uint32_t rsvd2 : 16; +#else + uint32_t function_id_h : 5; + uint32_t tunnel_type : 3; + uint32_t ip_proto : 8; + uint32_t rsvd0 : 16; + + uint32_t outer_sipv4_h : 16; + uint32_t rsvd1 : 8; + uint32_t outer_ip_type : 1; + uint32_t ip_type : 2; + uint32_t function_id_l : 5; + + uint32_t outer_dipv4_h : 16; + uint32_t outer_sipv4_l : 16; + + uint32_t vni_h : 8; + uint32_t rsvd2 : 8; + uint32_t outer_dipv4_l : 16; + + uint32_t rsvd3 : 16; + uint32_t vni_l : 16; + + uint32_t dipv6_key0 : 16; + 
uint32_t rsvd4 : 16; + + uint32_t dipv6_key2 : 16; + uint32_t dipv6_key1 : 16; + + uint32_t dipv6_key4 : 16; + uint32_t dipv6_key3 : 16; + + uint32_t dipv6_key6 : 16; + uint32_t dipv6_key5 : 16; + + uint32_t sport : 16; + uint32_t dipv6_key7 : 16; + + uint32_t rsvd5 : 16; + uint32_t dport : 16; +#endif +}; + /* * TCAM key structure. The two unions indicate the key and mask respectively. * The TCAM key is consistent with the TCAM entry. @@ -307,18 +592,25 @@ struct hinic3_tcam_key { struct hinic3_tcam_key_mem key_info; struct hinic3_tcam_key_ipv6_mem key_info_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_info_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_info_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_info_vxlan_ipv6_htn; }; union { struct hinic3_tcam_key_mem key_mask; struct hinic3_tcam_key_ipv6_mem key_mask_ipv6; struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6; + + struct hinic3_tcam_key_mem_htn key_mask_htn; + struct hinic3_tcam_key_ipv6_mem_htn key_mask_ipv6_htn; + struct hinic3_tcam_key_vxlan_ipv6_mem_htn key_mask_vxlan_ipv6_htn; }; }; /* Structure indicates the TCAM filter. */ struct hinic3_tcam_filter { - TAILQ_ENTRY(hinic3_tcam_filter) - entries; /**< Filter entry, used for linked list operations. */ + TAILQ_ENTRY(hinic3_tcam_filter) entries; /**< Filter entry, used for linked list operations. */ uint16_t dynamic_block_id; /**< Dynamic block ID. */ uint16_t index; /**< TCAM index. */ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */ @@ -362,37 +654,30 @@ struct hinic3_tcam_info { #define HINIC3_CNM_RULE_NUM 1 #define HINIC3_ECP_RULE_NUM 2 +#define HINIC3_UINT1_MAX 0x1 +#define HINIC3_UINT2_MAX 0x3 +#define HINIC3_UINT3_MAX 0x7 +#define HINIC3_UINT4_MAX 0xf +#define HINIC3_UINT5_WIDTH 0x5 +#define HINIC3_UINT5_MAX 0x1f +#define HINIC3_UINT15_MAX 0x7fff + /* Define Ethernet type. */ #define RTE_ETHER_TYPE_CNM 0x22e7 #define RTE_ETHER_TYPE_ECP 0x8940 -/* Protocol type of the data packet. */ -enum hinic3_ether_type { - HINIC3_PKT_TYPE_ARP = 1, - HINIC3_PKT_TYPE_ARP_REQ, - HINIC3_PKT_TYPE_ARP_REP, - HINIC3_PKT_TYPE_RARP, - HINIC3_PKT_TYPE_LACP, - HINIC3_PKT_TYPE_LLDP, - HINIC3_PKT_TYPE_OAM, - HINIC3_PKT_TYPE_CDCP, - HINIC3_PKT_TYPE_CNM, - HINIC3_PKT_TYPE_ECP = 10, - - HINIC3_PKT_UNKNOWN = 31, -}; - int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev, struct hinic3_fdir_filter *fdir_filter, bool add); int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev, - struct rte_eth_ethertype_filter *ethertype_filter, + struct hinic3_ethertype_filter *ethertype_filter, bool add); - void hinic3_free_fdir_filter(struct rte_eth_dev *dev); int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, uint32_t queue_id, uint32_t able); int hinic3_flow_parse_attr(const struct rte_flow_attr *attr, struct rte_flow_error *error); +uint16_t hinic3_tcam_alloc_index(struct hinic3_nic_dev nic_dev, uint16_t *block_id); +void hinic3_tcam_index_free(struct hinic3_nic_dev nic_dev, uint16_t index, uint16_t *block_id) #endif /**< _HINIC3_FDIR_H_ */ -- 2.45.1.windows.1 ^ permalink raw reply related [flat|nested] 80+ messages in thread
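The fdir changes above replace the open-coded dynamic TCAM block management in hinic3_add_tcam_filter() with two new helpers, hinic3_tcam_alloc_index() and hinic3_tcam_index_free(), so the add and delete paths share a single allocator. A minimal sketch of the intended add-path flow follows; it is illustrative only, assumes a pointer-typed nic_dev parameter and a block id passed by value (the posted patch passes nic_dev by value), and uses program_rule() as a hypothetical stand-in for the hinic3_add_tcam_rule() call whose full argument list is not visible in the hunk.

```c
/*
 * Illustrative sketch of the add path after the refactor, not driver code.
 * Assumptions: nic_dev is a pointer (the posted patch passes the struct by
 * value), hinic3_tcam_index_free() takes the block id by value, and
 * program_rule() is a hypothetical stand-in for hinic3_add_tcam_rule().
 */
static int
sketch_add_fdir_tcam_rule(struct hinic3_nic_dev *nic_dev,
			  struct hinic3_tcam_cfg_rule *fdir_tcam_rule,
			  struct hinic3_tcam_filter *tcam_filter)
{
	uint16_t block_id = 0;
	uint16_t index;

	/* Reserve a free slot; the helper allocates a fresh dynamic block
	 * when no existing block has room.
	 */
	index = hinic3_tcam_alloc_index(nic_dev, &block_id);
	if (index == HINIC3_TCAM_INVALID_INDEX)
		return -ENOMEM;

	tcam_filter->index = index;
	tcam_filter->dynamic_block_id = block_id;

	/* Global rule index = start of the block + slot within the block. */
	fdir_tcam_rule->index =
		HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_id) + index;

	if (program_rule(nic_dev, fdir_tcam_rule) != 0) {
		/* Releasing the last used slot of a block also frees the
		 * block itself, mirroring the delete path.
		 */
		hinic3_tcam_index_free(nic_dev, index, block_id);
		return -EFAULT;
	}

	return 0;
}
```

With the allocator centralized this way, the delete path in hinic3_del_dynamic_tcam_filter() only has to call hinic3_tcam_index_free() and decrement tcam_rule_nums, which is what the diff above does.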
* [REVIEW] net/hinic3: use different callback func to support htnfdir 2026-01-31 10:06 ` [PATCH 7/7] net/hinic3: use different callback func to support htn fdir Feifei Wang @ 2026-01-31 18:17 ` Stephen Hemminger 2026-02-02 1:17 ` 回复: " wangfeifei (J) 0 siblings, 1 reply; 80+ messages in thread From: Stephen Hemminger @ 2026-01-31 18:17 UTC (permalink / raw) To: dev; +Cc: Stephen Hemminger AI-generated review of hinic3.mbox Reviewed using Claude (claude-opus-4-5-20251101) on 2026-01-31 This is an automated review. Please verify all suggestions. --- # DPDK Patch Review: net/hinic3 Series This is a review of a 7-patch series adding support for new SPx series NICs to the hinic3 driver. --- ## Overall Series Issues ### Errors 1. **Multiple patches have compilation issues** - Several patches introduce syntax errors, missing semicolons, and undefined symbols that would prevent compilation. 2. **Missing meson.build updates** - Patches 3-7 add new source directories (`htn_adapt/`, `stn_adapt/`) with their own `meson.build` files, but the main `drivers/net/hinic3/meson.build` is not updated to include these subdirectories. 3. **Inconsistent function naming** - The series introduces functions with inconsistent naming between patches (e.g., `hinic3_cmdq_get_stn_ops()` vs `hinic3_nic_cmdq_get_stn_ops()`). --- ## Patch 1/7: net/hinic3: add support for new SPx series NIC ### Errors 1. **Line 38 (hinic3_csr.h)**: Uppercase `X` in hex literal: ```c #define HINIC3_DEV_ID_SP230 0X0229 ``` Should be lowercase `0x0229`. 2. **Line 48 (hinic3_ethdev.c)**: Undefined symbol `HINIC3_DEV_ID_VF_SP920` - this macro is not defined in the header file, only `HINIC3_DEV_ID_920` is defined. ### Warnings 1. **Commit message**: "suuport" should be "support" (typo). 2. **Alignment inconsistencies** in `hinic3_cmd.h` - the alignment changes are inconsistent (some use tabs, some use spaces to different column positions). --- ## Patch 2/7: net/hinic3: add enhance cmdq support for new SPx series NIC ### Errors 1. **Line 85 (hinic3_cmdq_enhance.c)**: Missing `RTE_UNUSED` or parameter `nic_dev` is unused in `prepare_rss_indir_table_cmd_header`. 2. **Line 169 (hinic3_cmdq_enhance.h)**: Missing space before closing comment: ```c #endif /*_HINIC3_CMDQ_ENHANCE_H_ */ ``` Should be `/* _HINIC3_CMDQ_ENHANCE_H_ */`. 3. **Line 1 (meson.build)**: Missing comma in sources list: ```python base_sources = files( 'hinic3_cmdq_enhance.c' 'hinic3_cmdq.c', ``` Should have comma after first file. 4. **Multiple typedef changes** - Using `u8`, `u16`, `u32`, `u64` instead of standard `uint8_t`, `uint16_t`, etc. DPDK uses standard C types. ### Warnings 1. **Inconsistent indentation** in `cmdq_sync_cmd()` function - some blocks have incorrect indentation levels. 2. **Missing Doxygen documentation** for new public functions in `hinic3_cmdq_enhance.h`. --- ## Patch 3/7: net/hinic3: use different callback func to split new/old cmdq operations ### Errors 1. **Line 55-60 (hinic3_nic_io.h)**: Missing semicolons in struct definition: ```c struct hinic3_nic_cmdq_ops { prepare_cmd_buf_clean_tso_lro_space_t prepare_cmd_buf_clean_tso_lro_space prepare_cmd_buf_qp_context_multi_store_t prepare_cmd_buf_qp_context_multi_store ``` Each line needs a semicolon. 2. **Line 80-81 (hinic3_htn_cmdq.c)**: Function `prepare_rss_indir_table_cmd_header` is called before it's defined (defined at line 93). 3. **Line 38 (hinic3_stn_cmdq.h)**: Missing closing brace or semicolon at end of struct. 4. 
**Undefined types** - `u32`, `u16`, `u8` used instead of `uint32_t`, `uint16_t`, `uint8_t`. ### Warnings 1. **Missing header guards** check - `hinic3_htn_cmdq.h` and `hinic3_stn_cmdq.h` header guards don't match standard format. --- ## Patch 4/7: net/hinic3: add fun init ops to support Compact CQE ### Errors 1. **Line 614 (hinic3_nic_io.c)**: Unterminated string literal: ```c PMD_DRV_LOG(ERR, "Set rq cqe context failed, qid: %d, err: %d, status: 0x%x, out_size: 0x%x", ``` String cannot span lines like this. 2. **Line 666 (hinic3_nic_io.c)**: Redefinition of `nic_dev`: ```c hinic3_set_rq_enable(struct hinic3_nic_dev *nic_dev, uint16_t q_id, bool enable) { struct hinic3_nic_dev *nic_dev = NULL; ``` 3. **Line 669 (hinic3_nic_io.c)**: Undefined variable `dev` used: ```c if (!dev) return -EINVAL; ``` 4. **Missing function declaration** - `hinic3_nic_tx_rx_ops_init` is called but not declared. ### Warnings 1. **Commit message**: "fun init" should be "func init" or "function init". 2. **Signed-off-by has leading space** in commit message. --- ## Patch 5/7: net/hinic3: add rx ops to support Compact CQE ### Errors 1. **Line 476 (hinic3_rx.h)**: Missing comma in function declaration: ```c hinic3_rx_get_compact_cqe_info(struct hinic3_rxq *rxq, volatile struct hinic3_rq_cqe *rx_cqe struct hinic3_cqe_info *cqe_info); ``` 2. **Line 790 (hinic3_rx.c)**: `if` statement without space: ```c if(err) { ``` Should be `if (err)`. 3. **Undefined macro** `HINIC3_PKT_TX_TUNNEL_GENEVE` and `HINIC3_PKT_TX_TUNNEL_IPIP` used but not defined. ### Warnings 1. **Inconsistent brace style** - some functions use opening brace on same line, others on new line. --- ## Patch 6/7: net/hinic3: add tx ops to support Compact CQE ### Errors 1. **Line 379 (hinic3_tx.c)**: Undefined variable `wqe`: ```c wqe->task->pkt_info0 = ``` Should be `task->pkt_info0`. 2. **Line 606 (hinic3_tx.c)**: Typo in function name: ```c return inic3_non_tso_pkt_pre_process(mbuf, wqe_info); ``` Should be `hinic3_non_tso_pkt_pre_process`. 3. **Line 328 (hinic3_tx.h)**: Duplicate enum value: ```c enum sq_wqe_data_format { SQ_NORMAL_WQE = 0, ``` `SQ_NORMAL_WQE` is already defined in `enum sq_wqe_type`. ### Warnings 1. **Missing `RTE_MBUF_F_TX_OUTER_UDP_CKSUM`** - `HINIC3_PKT_TX_OUTER_UDP_CKSUM` is used but the corresponding RTE flag mapping is missing. --- ## Patch 7/7: net/hinic3: use different callback func to support htn fdir ### Errors 1. **Line 926 (hinic3_fdir.c)**: Wrong parameter type: ```c hinic3_tcam_alloc_index(struct hinic3_nic_dev nic_dev, uint16_t *block_id) ``` Should be `struct hinic3_nic_dev *nic_dev` (pointer). 2. **Line 942 (hinic3_fdir.c)**: Same issue - missing pointer for `nic_dev` parameter. 3. **Line 656 (hinic3_fdir.h)**: Missing semicolon at end of function declaration: ```c void hinic3_tcam_index_free(struct hinic3_nic_dev nic_dev, uint16_t index, uint16_t *block_id) ``` 4. **Line 750 (hinic3_fdir.c)**: Missing closing brace: ```c if (index == HINIC3_TCAM_INVALID_INDEX) { return -ENOMEM; } ``` The outer `if` block is not closed properly. ### Warnings 1. **Inconsistent parameter types** - Some functions use `struct hinic3_nic_dev` while others use `struct hinic3_nic_dev *`. --- ## Summary This patch series has significant issues that need to be addressed before it can be accepted: 1. **Compilation errors** - Multiple syntax errors prevent building 2. **Undefined symbols** - References to macros and functions that don't exist 3. **Type inconsistencies** - Mix of kernel-style types (`u32`) and standard types (`uint32_t`) 4. 
**Missing build system updates** - New directories not integrated into meson build 5. **Documentation gaps** - New APIs lack Doxygen comments The patches should be thoroughly tested for compilation before resubmission. ^ permalink raw reply [flat|nested] 80+ messages in thread
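Most of the patch-7 findings above are prototype problems rather than logic problems. For illustration only, declarations consistent with the review's comments could look like the sketch below; this is not the authors' follow-up code, and passing the block id by value to the free helper is an assumption on top of what the review states.

```c
/* Hypothetical corrected prototypes for drivers/net/hinic3/hinic3_fdir.h,
 * reflecting the review comments (pointer-typed nic_dev, terminating
 * semicolon); passing block_id by value here is an assumption.
 */
uint16_t hinic3_tcam_alloc_index(struct hinic3_nic_dev *nic_dev,
				 uint16_t *block_id);
void hinic3_tcam_index_free(struct hinic3_nic_dev *nic_dev,
			    uint16_t index, uint16_t block_id);
```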
* Re: [REVIEW] net/hinic3: use different callback func to support htnfdir
  2026-01-31 18:17 ` [REVIEW] net/hinic3: use different callback func to support htnfdir Stephen Hemminger
@ 2026-02-02  1:17   ` wangfeifei (J)
  0 siblings, 0 replies; 80+ messages in thread
From: wangfeifei (J) @ 2026-02-02  1:17 UTC (permalink / raw)
  To: Stephen Hemminger, dev@dpdk.org

Thanks for your review, Stephen. Sorry for uploading this patch series with these issues. We have marked this series as superseded, and we will send a new version once the issues are fixed.

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: 1 February 2026 2:18
> To: dev@dpdk.org
> Cc: Stephen Hemminger <stephen@networkplumber.org>
> Subject: [REVIEW] net/hinic3: use different callback func to support htnfdir
^ permalink raw reply	[flat|nested] 80+ messages in thread